Thursday 25 April 2024

Editing in the Cloud Is Easy (As Long As You Have the Right Speed, Storage and Strategy)

NAB

There’s an old tech industry joke that “the cloud” is a fancy way of saying “somebody else’s computer.” That’s a bit of an oversimplification since cloud computing services are a lot more involved than just providing access to a server someone else owns.

article here

But the fact remains that the primary attribute of cloud computing is accessing computing resources — software applications, servers, data storage, development tools, and networking functionalities — remotely over the internet.

Increasingly that means everyday post-production processes and crafts, editing included. As with much of post-production, the real shift to the cloud came with COVID: if productions were to continue behind closed doors, remote and collaborative ways of working had to be found.

While many facility managers and editors found those ad hoc attempts at the start of 2020 only just workable, proving that the technology could work at all opened people’s minds to the benefits of more permanent cloud-based editing.

Today, at the very least, hybrid home-office scenarios are common, with cloud-based workflows no longer considered unusual across genres ranging from live news and sports to feature animation, scripted TV and documentaries.

In a series of primers (ostensibly to promote its cloud storage), LucidLink explains cloud video editing and outlines the benefits it can offer.

Much of what the company has to say will be familiar to industry pros, but there’s a no-nonsense clarity for anyone unsure.

Cloud video editing refers to workflows that leverage the cloud rather than on-premise infrastructure. Editors can share their media while working with the complete toolset of a desktop-based NLE such as Adobe Premiere, Avid or DaVinci Resolve. The key difference is that the data itself is stored in the cloud, rather than on local devices. With the right software, cloud-based video editing can also include tools installed on virtual machines that perform parts of an editing workflow.

One of the chief benefits of working this way is remote collaboration. Since cloud-based systems and storage are inherently accessible from anywhere in the world, this enables both hybrid and fully remote workflows for editing teams.

Configured correctly, cloud can save time and cash (though the article doesn’t particularly delve into the costs of cloud storage and data transfer, which vary greatly depending on facility needs).

“Although the cloud offers clear advantages when it comes to smaller files (like low-res video proxies), until recently handling large files was an unsolved challenge for cloud video editors due to lengthy upload and download times,” LucidLink notes, before offering its tech as a solution.

There’s also a look at the merits of cloud versus on-premise set-ups with files residing in a SAN or NAS system within a facility.

This latter approach, says the vendor, “requires copying large files to hard drives or using file transfer services if collaboration requires working with freelance talent in locations other than the facility itself.

“Even when working with large amounts of raw video data, editors often need to search, analyze and tag files, preferably in real time. The larger the file, the longer it takes to download, upload, render, or share. Beyond the costly hardware investment, these systems still don’t solve the problem of waiting for files to download or distribute.”

However, it’s not usually a zero-sum game. For now, most facilities prefer to keep a foot in both camps, in part as a safety net against data loss.

There are of course lots of choices when it comes to storage and the right strategy is vital for any production, says LucidLink.

“On-prem SAN and NAS systems can be very performant, but those benefits only exist in one location: a facility. The need to collaborate anywhere, however, is not addressed by these legacy approaches. This is where a cloud-based approach comes in.”

As we have seen from the recent NAB Show, more and more vendors are offering cloud-based workflows. These increasingly start from the camera, where proxies are uploaded directly via the internet to some form of media management platform, from which authenticated users anywhere can download or stream files to work from.
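
To make that concrete, here is a minimal sketch of such a handoff, assuming a generic S3-compatible object store; the bucket, file paths and expiry are illustrative rather than any particular vendor’s platform.

```python
# Minimal sketch of a camera-to-cloud handoff against a generic S3-compatible
# object store. Bucket name, key paths and expiry are illustrative.
import boto3

s3 = boto3.client("s3")  # credentials come from the environment

# 1. The camera (or an on-set gateway) pushes a low-res proxy as it records.
s3.upload_file("A001C003_proxy.mp4", "dailies-bucket",
               "show/day01/A001C003_proxy.mp4")

# 2. Authenticated users anywhere receive a short-lived URL to stream or
#    download the file without holding permanent credentials.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "dailies-bucket",
            "Key": "show/day01/A001C003_proxy.mp4"},
    ExpiresIn=3600,  # valid for one hour
)
print(url)
```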

In a few years, looking back at the heavy-duty, power-hungry monoliths of Silicon Graphics machines, Quantel boxes or Autodesk hardware, we will wonder just how we ever worked without the internet.

AI Is Definitely Changing (But Not Destroying) Hollywood

NAB

The current consensus appears to be that generative video is not yet a Hollywood-killer and perhaps never will be. While AI is creeping into production, it is doing so to augment certain workflows or make specific alterations, with no sign of it being used to auto-generate entire feature films or push creatives out of a job. But these are still early days.

article here

“It’s a fraught time because the messaging that’s out there is not being led by creators,” said producer Diana Williams, a former Lucasfilm executive and now CEO and co-founder of Kinetic Energy Entertainment, speaking at the 2024 SXSW panel “Visual (R)evolution: How AI is Impacting Creative Industries.”

Certainly, AI is a disruptive technology, but media and entertainment, of all industries, should be used to taking tech change on board.

Julien Brami, a creative director and VFX supervisor at Zoic Studios, spoke on the panel with Williams, as Chris O’Falt reports at IndieWire. Brami said the common thread with each tech disruption is that filmmakers adopt new tools to tell stories. “I started understanding [with AI] that a computer can help me create way faster, iterate faster, and get there faster.”

Speed. That’s what you hear, over and over again, as the real benefit of Gen AI imaging, writes O’Falt, who spoke to numerous filmmakers about the topic.

“Few see a viable path for Gen AI video to make its way to the movies we watch. Using AI is currently the equivalent of showing up on set in a MAGA hat.”

Finding actual artists who are willing to use AI tools with some kind of intention is tough, agrees Fast Company’s Ryan Broderick. Most major art-sharing platforms have faced tremendous user backlash for allowing AI art, and there’s even a new tool called Nightshade that artists are using to prevent their images from being used to train generative AI.

Graphic designer and digital art pioneer Rob Sheridan tells Fast Company that the backlash against AI tech in Hollywood is directly caused by both tech companies and studios claiming that it will eventually be able to spit out a movie from a single prompt. Instead, Sheridan says it’s already obvious that AI technology will never work without people who know how to integrate it into existing forms of art, whether it’s a poster or a feature film.

“The thing that is hurting that progress — for this to kind of fold into the tool kit of creators seamlessly — is this obnoxious tech bubble shit that’s going on,” he says. “They’re trying to con a bunch of people with a lot of money to invest in this dream and presenting this very crass image to people of how eager these companies are, apparently, to just ditch all their craftspeople and try out this thing that everyone can see isn’t going to work without craftspeople.”

Media consultant Doug Shapiro tells Fast Company that AI usage will increase in Hollywood as studios grow more comfortable with the tech. He also suspects the current backlash against using AI is likely temporary.

“There’s this kind of natural backlash that tends to ease over time,” he says. “It’s going to get harder and harder to tell where the effects of humans stopped, and AI starts.”

Generative AI is cropping up most commonly in relatively small-stakes instances during pre- and post-production. “Rather than spend a ton of money on storyboarding and animatics and paying very skilled artists to spend 12 weeks to come up with a concept,” Shapiro adds, “now you can actually walk into the pitch with the concept art in place because you did it overnight.”

Studios have also begun using AI to touch up an actor’s laugh lines or clean up imperfections on their face that might not be caught until after shooting has wrapped. In both cases, viewers might not necessarily even know they’re looking at something that has been altered by an AI model.

David Raskino, co-founder and CTO of AI developer Irreverent Labs, suggests to Will Douglas Heaven at MIT Technology Review that GenAI could be used to generate short scene-setting shots of the type that occur all the time in feature-length movies.

“Most are just a few seconds long, but they can take hours to film,” Raskino says. “Generative video models could soon be used to produce those in-between shots for a fraction of the cost. This could also be done on the fly in later stages of production, without requiring a reshoot.”

AI is putting filmmaking tools in the hands of more people than ever, and who can argue that’s not a good thing?

Somme Requiem, for example, is a short film about World War I made by Los Angeles production company Myles. It was generated entirely using Runway’s Gen-2 model, then stitched together, color-corrected, and set to music by human video editors.

As Douglas Heaven points out, “Myles picked the period wartime setting to make a point. It didn’t cost anywhere near the $250 million of Apple TV+ series Masters of the Air, nor take anything like as long as the four years Peter Jackson took to produce the World War I doc They Shall Not Grow Old from archive video.”

“Most filmmakers can only dream of ever having an opportunity to tell a story in this genre,” Myles’ founder and CEO, Josh Kahn, says to MIT Technology Review. “Independent filmmaking has been kind of dying. I think this will create an incredible resurgence.”

However, he believes “the future of storytelling will be a hybrid workflow,” in which humans make the craft decisions using an array of AI tools to get to the end result faster and cheaper.

Michal Pechoucek, CTO at Gen Digital, agrees. “I think this is where the technology is headed,” he says. “We’ll see many different models, each specifically trained in a certain domain of movie production. These will just be tools used by talented video production teams.”

A big problem with current versions of generative video is the lack of control users have over the output. Producing still images can be hit and miss; producing a few seconds of video is even riskier. It’s why humans will need to stay involved. But, of course, even as you read this, OpenAI’s Sora is getting better and better.

“Right now, it’s still fun, you get a-ha moments,” says Yishu Miao, CEO of UK-based AI startup Haiper. “But generating video that is exactly what you want is a very hard technical problem. We are some way off generating long, consistent videos from a single prompt.”

 


When It Comes to Creativity, Humans and Machines Can Co-Exist (But It Won’t Always Be Easy)

NAB

As uncertainty and debate heat up about the impact of generative AI on the creative arts, Adobe wants to defend its position in developing AI tools integrated into Photoshop and its own GenAI model, Firefly, while also reinforcing its brand as a champion of the creator.

article here

“I truly believe that to be human is to be creative, that creativity is a core part of who we are, whether or not you consider yourself a creative person,” says Brooke Hopper, principal designer for emerging design for AI/ML at Adobe, speaking to Debbie Millman, designer and educator at The School of Visual Arts and host of the podcast Design Matters.

While AI is in its baby-steps phase, it is easier to cling to the idea that creativity is essential to what it means to be human. As AI technology advances, however, what passes for art, imagination or lived experience may become indistinguishable from the machine’s output.

“It’s our emotions, our point of view, our life experiences,” says Hopper. “It’s spontaneity, it’s deciding when and where to break the rules. And so I do think that there’s a coexistence of humans and machines [where] humans do what humans are good at and ultimately, the machines are learning from us.”

Adobe can speak from a position of some strength here, since it decided several years ago to support and build a pathway for tracking how AI has altered images and video content, while training its own AI tools on data that it owns or that has been cleared for use by third parties.

“We have to give them that data,” she says, while also anthropomorphizing the machines. “They’re not at this point in time making up data on their own. They’re simply taking the data that we feed them, breaking it down, and then recreating it from noise.”

Hopper acknowledges the issues that come from feeding the machine data that humans have created.

“One thing to remember is these machines rely on information that humans put out into the world. And humans are biased, whether we try to be or not, we are. Therefore the machines are. We need to do things in order to mitigate that bias.”

Adobe advocates training AI on data that is licensed, with verified ownership, and that isn’t copyrighted. It began the Content Authenticity Initiative in 2019 to help avoid some of the deepfake issues that are now surfacing with regularity.

By embedding metadata in the content that’s being created and being able to tag content with “do not train” credentials, it hopes to “actively pursue ways that we can make sure that there are artists’ protections and that creators are being protected.”
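
For illustration, here is a hedged sketch of what such a credential might look like as structured metadata, loosely modeled on the C2PA training-and-data-mining assertion; the field names are indicative, not Adobe’s exact schema.

```python
# Illustrative only: a "do not train" credential expressed as structured
# metadata, loosely modeled on the C2PA training-and-data-mining assertion.
# Field names are indicative, not Adobe's exact schema.
import json

do_not_train_assertion = {
    "label": "c2pa.training-mining",
    "data": {
        "entries": {
            "c2pa.ai_generative_training": {"use": "notAllowed"},
            "c2pa.ai_training": {"use": "notAllowed"},
            "c2pa.data_mining": {"use": "notAllowed"},
        }
    },
}

print(json.dumps(do_not_train_assertion, indent=2))
```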

As other Adobe execs have indicated, there is only so much responsibility a supplier of AI tools is prepared to accept. Consumers need to accept their fair share of responsibility for questioning whether content is “faked.”

Hopper says, “The general population [should be] educated about how to spot a deepfake, how to know if a website is not secure, because the technology to create deepfakes is getting better. And unfortunately, it’s the same technology that’s helping people create new and different content.”

Hopper backs moves by Congress and other government bodies to enshrine protections from deepfakes in law. “But if nothing is done legally, then morality is always a little bit of a slippery slope.”

The argument from Adobe is that human creativity will never be usurped by AI; that it remains a tool to be used as part of a human-led creative process. Hopper is an artist herself and says she would like to use GenAI tools to help her print 3D designs.

“That’s not to say that I’m going to become a professional 3D artist by any means, but [it] allows me to work in mediums and media that I wouldn’t be able to, or would struggle to, previously. And that’s what I’m super excited about.”

Generative AI, she insists, is “super useful” within the ideation phase. “Imagine being able to generate even more ideas and more different directions to be able to come to such a better end goal.”

And the one thing that differentiates human from AI generated content, she says, is our own ability to break the rules.

“Machines don’t know when and how to break rules. They follow the rules. So that’s what we lean into. One of the biggest design principles is you have to learn the rules in order to break the rules, and breaking the rules is what makes something creative and enjoyable. It’s that serendipitous rule breaking that feeds into creativity.”

Right now, your basic GenAI tool cannot “think.” It will spew infinite versions, each one different, of an input we give it, based on data we feed it. That may well change. But Adobe and Hopper look on the bright side. What else can they do?

“In the next 10 years we’re going to see an explosion of more creativity and content and, I think, more awareness. I’m really excited about the possibilities of more immersive design and experiences,” Hopper says. “Like, what happens when you’re potentially interacting with the artist in [a gallery or museum] piece, or you become part of the piece?”

 


Ad dollars accelerate to digital video with CTV, social big gainers

StreamTV Insider

article here

Advertising spend on connected TV platforms in the U.S. soared to a record $20 billion in 2023 and is anticipated to hit $22.7 billion this year, representing 12% growth, as money follows viewers in their continuing switch from linear TV. That would make the CTV market 35% larger than that of online video (OLV) in 2024, according to the IAB’s latest annual Digital Video Ad Spend & Strategy Report.

Its figures also show that advertisers value social video as much as, if not more than, CTV: spend on social video is expected to rise 20% for the second successive year, to $23.4 billion in 2024.

All digital categories are rising with the tide, with digital video (CTV and social video in particular) continuing to be the top channel for advertisers looking to reach engaged audiences at scale across the purchase funnel.

Total digital video ad spend across CTV, social, and online is projected to grow 16% this year — nearly 80% faster than total media overall — as combined spending is estimated to reach $62.9 billion, per the IAB report.

Digital video in 2024 is projected to take a 52% share of combined digital video and linear TV ad dollars, surpassing linear TV, as share has shifted nearly 20 percentage points from linear to digital video in the last four years.
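
Some back-of-envelope arithmetic, derived from (not quoted in) the report’s figures, shows how these numbers hang together:

```python
# Derived, back-of-envelope figures only; nothing below is quoted from the report.
ctv_2023, ctv_2024 = 20.0, 22.7  # $B, CTV ad spend (2023 figure as rounded above)
total_2024 = 62.9                # $B, combined digital video, +16% YoY

print(f"CTV growth on the rounded figures: {ctv_2024 / ctv_2023 - 1:.1%}")  # ~13.5%
print(f"2023 base implied by the stated 12%: ${ctv_2024 / 1.12:.1f}B")      # ~$20.3B
print(f"Implied OLV 2024 (CTV is 35% larger): ${ctv_2024 / 1.35:.1f}B")     # ~$16.8B
print(f"Implied 2023 combined total: ${total_2024 / 1.16:.1f}B")            # ~$54.2B
```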

While dollars flowing into CTV primarily come from reallocations (particularly from linear TV and other traditional media), 31% of revenues come from overall expansion of advertising budgets.

“Among the largest ad spenders, CTV (69%) and social video (70%) are considered ‘must buys’ because of their ability to deliver both scale for branding at the top of the funnel and performance outcomes at the bottom of the funnel,” said Chris Bruderle, VP of Industry Insights & Content Strategy at IAB, in a statement.

Per a 2024 Advertiser Perceptions study quoted by the IAB, a third of advertisers identify their “general digital budgets” as a key funding source for CTV.

This year’s report also finds that most brand categories are projecting double-digit growth year over year, with consumer packaged goods (CPG) and retail leading the way. CPG and retail digital video ad spend are expected to grow 20% and 30% respectively, while also generating the largest total ad spend.

The report concludes that CTV and social video continue to be valued as audience-addressable channels that deliver both scale for branding at the top of the funnel and performance outcomes at the bottom of the funnel.

“Advertisers go where consumers are, and today that means digital video,” said David Cohen, CEO, IAB. “The challenge ahead is this: in a crowded landscape, who can deliver the best viewing experience, with the best content choices and the most innovative advertising options? That competition is ultimately good for consumers and good for the industry.”

The IAB partnered with Guideline, which leveraged ad billing data, other market estimates and an IAB-commissioned Advertiser Perceptions quantitative survey of TV/digital video ad spend decision-makers to generate the results.

A second part of the report, to be released on July 15th, will address strategies driving activation and measurement.

 


Vox Media’s Marty Moe joins TMB to expand omnichannel reach

StreamTV Insider

article here

Trusted Media Brands (TMB), the company behind several lifestyle and entertainment-oriented free ad-supported streaming TV (FAST) channels, has appointed former Vox Media Studios boss Marty Moe as company president.

Moe will oversee TMB’s global web, social, and streaming businesses across brands that include Family Handyman, The Pet Collective, and Reader’s Digest.

Since 2015, TMB has moved away from its print roots as Reader’s Digest Association, first into digital and then to an omnichannel approach under the leadership of CEO Bonnie Kintzer.

It now claims 2.5 billion monthly views on social media, more than 250 million followers across social channels including YouTube and TikTok, and a content reach of 200 million consumers worldwide.

Last year TMB said its programming had amassed 11 billion minutes of watch time on FAST platforms.

Deals with Peacock, LG, Roku, Apple TV and Samsung TV among other CTV platforms as well as with out-of-home platform Atmosphere have helped streaming viewership grow.

Moe joins the company “at an exciting time” Kintzer said in a release, “with streaming viewership up 16% year over year, brands like The Pet Collective hitting their highest earning months on Facebook and YouTube, and opportunity across our web properties.”

Moe spent 13 years at Vox Media, where he oversaw the strategic and operational direction of the company’s editorial, sales, television/film, and podcasting divisions. Prior to Vox, Moe was SVP at AOL, overseeing the finance, news, and information group. He later joined SB Nation as chief content officer; SB Nation then became Vox Media, where he co-founded The Verge, a leading technology news website.

TMB is a founding member of the Independent Streaming Alliance (ISA), a collective that also includes Tastemade, Vevo and Scripps, among others, and that aims to promote the group’s ad-supported video streaming services to platforms, advertisers, and regulatory bodies.

Other TMB brands include Taste of Home, People Are Awesome and Birds & Bloom.

Moe added, “With proven growth across streaming and social platforms and new opportunities for innovation across search and web, it’s an incredible time to dive in - we are just scratching the surface of the opportunity.”

 


What Happened to the Promise of Our Internet Utopia?

NAB

There’s a conspiracy theory that the web is effectively dead, made up primarily of bots and generated content. While that may not yet be true, the AI companies seem determined to make it a reality.

article here

So says Paris Marx, writer of the Disconnect newsletter and host of the Tech Won’t Save Us podcast, in a coruscating critique of the death of the open internet at the hands of corporate greed.

In a blog post, he claims that the digital revolution has failed. That revolution was the idea, enshrined by liberal thinkers at the birth of the World Wide Web in the mid-1990s, that the virtual world would be a place for equals “free from the burdens of race, sex, or wealth,” and from the dictates of government or business.

Instead, this utopia has been slowly strangled by its own “platformization.” Marx concedes that the compartmentalization of the web made it much easier for billions more people to get online, but it concentrated huge power and wealth in the hands of a few.

Google and Facebook get it with both barrels. “The greed of those two companies has sent news media spiraling, with lower ad revenue leading to successive layoffs that reduce the quality of the journalism they publish while their websites are stuffed with poor quality ads if not locked behind a paywall altogether,” Marx writes.

Amazon doesn’t escape either, and neither do the streamers. But compared to where we are now, “we look back on those times as the good old days, before the ambitions of tech companies vastly expanded and the pressure for profit accelerated the degradation of what they’d built,” he recalls.

“Everything must be sacrificed on the altar of tech capitalism,” he says, and AI is the nadir.

It is corrosive to society, culture and politics, and anathema to us as human beings, he suggests.

“The effort to route as many interactions as possible through apps and make our smartphones as addictive as possible has spawned an epidemic of loneliness and even social disconnection.”

AI tools are emphatically not the beginning of a vast expansion in human potential, says Marx. “They’re not intelligent or prescient; they’re just churning out synthetic material that aligns with all the connections they’ve made between the training data they pulled from the open web.

“Once again, the push to adopt these AI technologies isn’t about making our lives better; it’s about reducing the cost of producing ever more content to keep people engaged, to serve ads against, and to keep people subscribed to struggling streaming services. The public doesn’t want the quality of news, entertainment, and human interactions to further decline because of the demands of investors for even greater profits, but that doesn’t matter.”

He calls out the massive (hidden) energy, water, and mineral cost of running all the data centers behind AI tools to show “how little the proliferation of AI tools has to offer us.”

So what’s Marx going to do about it? Well, to paraphrase his 19th-century namesake, we need to tear down the machine and build us all a new one.

“Another internet is possible,” he says. “The time for tinkering around the edges has passed. The only hope to be found today is in seeking to tear down the edifice the tech industry has erected and to build new foundations for a different kind of internet that isn’t poisoned by the requirement to produce obscene and ever-increasing profits to fill the overflowing coffers of a narrow segment of the population.”

Alas, as well written as this sermon is, Marx has no manifesto for how to knock this house right down.

He argues that “we don’t have to be locked into the digital dystopia Silicon Valley has created,” but can’t serve up an alternative beyond a loose sketch.

As the web declines, Marx says, “we need to consider what a better alternative could look like and the political project it would fit within.”


It’s All Happening in the Cloud, Baby: New Camera-to-Cloud, MAM and Cloud Storage Workflows

NAB

article here

Camera-to-cloud workflows accelerate the creative process. By shrinking the capture-to-edit timeframe, editors can begin working on media instantly instead of waiting for hard drives or delayed file transfers.

Proxy generation of original camera files to H.264 and ProRes is one of the most used features of Studio Network Solutions’ EVO Suite, and with the latest updates this process is now faster and also supports the latest RED and ARRI cameras.
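
As a minimal sketch of the kind of proxy transcode such systems automate, here is ffmpeg driven from Python; the file names and encode settings are hypothetical, not EVO’s actual pipeline.

```python
# Minimal sketch of the proxy transcode such systems automate. Requires
# ffmpeg on PATH; file names and encode settings are hypothetical.
import subprocess

def make_h264_proxy(source: str, proxy: str, height: int = 540) -> None:
    """Transcode an original camera file to a lightweight H.264 proxy."""
    subprocess.run(
        [
            "ffmpeg", "-y", "-i", source,
            "-vf", f"scale=-2:{height}",      # downscale, keep aspect ratio
            "-c:v", "libx264", "-b:v", "3M",  # edit-friendly low bitrate
            "-c:a", "aac", "-b:a", "128k",
            proxy,
        ],
        check=True,
    )

make_h264_proxy("A001C003.mxf", "A001C003_proxy.mp4")
```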

Storage and media management specialist EditShare has teamed with Atomos to bring camera-to-cloud workflows from the latter’s camera-mounted monitor-recorders into its collaboration platform MediaSilo. After connecting a camera to an Atomos device via HDMI or SDI, pairing the device with Atomos Cloud, and adding MediaSilo as the destination, users can upload proxy files as they record.

It’s the latest such integration with Atomos products. “We’ve always considered ourselves to be a neutral ‘gateway’ to a wide selection of secure destinations for our customers’ content,” said Atomos CEO Jeromy Young.

The grandfather of camera-to-cloud is Frame.io, which was first released in 2015 and is now being revamped by its new owner, Adobe. The fourth version of the asset management software is “more than just an update; it signifies a complete transformation of the product, marking the beginning of a new chapter in how modern teams structure and manage their creative workflows,” says Frame.io co-founder and VP Emery Wells.

Metadata is apparently key to the new Frame.io v4 experience. “Instead of relying solely on a rigid folder structure, you can now organize and view your media based on how you and your team work in a single, unified platform,” explains Wells.

Frame.io has introduced a flexible, saved view of assets called Collections that allow users to select, filter, group, and sort media using metadata. “Collections update in real time, reducing the time your team spends manually culling and organizing,” the company says. “They also allow you to organize (or reorganize) your files in unique combinations without needing to make duplicates of your assets, which conserves storage space. Collections is our answer to providing the kinds of flexible workflows that you’ve long asked for, without us dictating the approach, process, or template for how you work.”
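
As a toy illustration of the idea (this is not Frame.io’s API), a collection can be thought of as a saved filter over asset metadata, evaluated on demand rather than by duplicating files into folders:

```python
# Toy illustration of a metadata-driven "saved view": one asset list, many
# live groupings, no duplicated files. This is not Frame.io's API.
assets = [
    {"name": "A001C003.mov", "scene": "12", "camera": "A", "status": "approved"},
    {"name": "B002C001.mov", "scene": "12", "camera": "B", "status": "review"},
    {"name": "A004C007.mov", "scene": "14", "camera": "A", "status": "approved"},
]

def collection(items, **filters):
    """A 'collection' is just a filter over metadata, evaluated on demand."""
    return [a for a in items if all(a.get(k) == v for k, v in filters.items())]

print(collection(assets, scene="12"))                     # everything in scene 12
print(collection(assets, camera="A", status="approved"))  # approved A-cam takes
```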

Blackmagic Design has expanded its camera-to-cloud workflow by enabling its latest cameras, the Pyxis and Ursa Cine 12K, to transfer proxy (compressed) media directly to the cloud. Its overall distributed collaboration concept for post-production relies on a piece of on-premise hardware. Announced in 2022, the Blackmagic Cloud Store now has a new Max model with a capacity of either 24TB or 48TB, the former costing $6,495. This increased network storage capacity is designed to work with the sizeable files of the 12K Ursa.

One upside is media sync with DaVinci Resolve, meaning that the moment a film crew starts shooting, the camera media will sync within seconds so the post-production team can start working.

According to CEO Grant Petty, Blackmagic Cloud Store is designed to handle the large media files used in film and television where multiple editors, colorists, audio engineers and VFX artists all work on the media at the same time. “It even handles massive 12K Blackmagic RAW digital film files,” he says. “Each user gets zero latency and they don’t need to store files on their local computer. That’s perfect for DaVinci Resolve.”

Users can install a local cache of media uploaded either to the Blackmagic Cloud website or services like Dropbox and Google Drive. BMD says this makes working faster because files are distributed globally to as many sites as customers need.

Cloud MAM and Cloud Storage

Media organizations are operating under tighter deadlines and narrower profit margins and are looking for ways to speed production workflows while controlling costs. This means tools for migrating to cloud that can manage costs between tiers of “hot” and “cold” storage, as well as between cloud and on-premise stores, are in high demand.

EVO Suite from Studio Network Solutions is a media asset management tool for remote collaborative working. The latest updates enable users to sync, replicate, and back up media from EVO to destinations that include NAS servers on-prem in a facility, FTP and SFTP sites, and a number of cloud storage platforms, including Box.com, Wasabi, Backblaze, Google, AWS and Azure.
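
Under the hood, this kind of replication amounts to pushing the same object to several S3-compatible endpoints (Wasabi and Backblaze B2 both expose S3-compatible APIs); the sketch below is a generic illustration with placeholder endpoints and buckets, not EVO’s actual mechanism.

```python
# Generic sketch of replicating one file to several S3-compatible
# destinations. Endpoints and bucket names are placeholders; credentials
# come from the environment. This is not EVO's actual mechanism.
import boto3

destinations = [
    {"endpoint": "https://s3.wasabisys.com", "bucket": "facility-backup"},
    {"endpoint": "https://s3.us-west-004.backblazeb2.com",
     "bucket": "facility-archive"},
]

def replicate(local_path: str, key: str) -> None:
    for dest in destinations:
        client = boto3.client("s3", endpoint_url=dest["endpoint"])
        client.upload_file(local_path, dest["bucket"], key)

replicate("A001C003_proxy.mp4", "day01/A001C003_proxy.mp4")
```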

A new bandwidth throttling function controls how much of EVO’s processing power is dedicated to automation (transcodes, for example) and how much resource is made available to editors working concurrently on projects; a generic sketch of the technique follows below. ShareBrowser integrates EVO Suite media management directly into Adobe Premiere Pro and DaVinci Resolve.

“So, when a producer wants to call out a sub-clip for the highlight reel, or leave a comment at a specific timecode marker, those details appear directly in the editor’s timeline in Resolve and Premiere,” the company says.
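
Bandwidth throttling of this kind is classically implemented as a token bucket; the sketch below is a generic illustration of the technique, not SNS’s code.

```python
# Generic token-bucket throttle of the kind used to cap how much bandwidth
# background automation (transcode uploads, say) may consume. Not SNS's code.
import time

class TokenBucket:
    def __init__(self, rate_bytes_per_sec: float, burst_bytes: float):
        self.rate = rate_bytes_per_sec
        self.capacity = burst_bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def consume(self, nbytes: int) -> None:
        """Block until nbytes of budget is available, then spend it."""
        while True:
            now = time.monotonic()
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= nbytes:
                self.tokens -= nbytes
                return
            time.sleep((nbytes - self.tokens) / self.rate)

bucket = TokenBucket(rate_bytes_per_sec=10e6, burst_bytes=1e6)  # ~10 MB/s cap
for _ in range(8):
    bucket.consume(256 * 1024)  # each 256 KB chunk of an automated upload
```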

Sony has a bewildering array of cloud-related services for live streams, production and post. The company describes Networked Live as an ecosystem to “enable production resources to be optimally connected, used, and shared” to facilitate remote production through on-premise and cloud solutions. It also markets a cloud gateway service called C3 Portal, which can onboard live feeds from cellular bonding links via Teradek, LiveU, and TVU Networks and in combination with Dejero and Haivision.

Sony further offers Creators’ Cloud, which comprises a number of cloud-based platforms and apps including a new Multi-Cam monitoring function, as well as media management service Ci Media Cloud. A new integration with Marquis’s Medway provides automated ingest from Ci into Avid systems. Ci also has a new workflow to support automated VFX pulls.

Cloud object storage platform Storj and storage and file management developer Amove have joined forces to offer media customers a route from on-premise into hybrid and full cloud environments.

Amove provides a desktop drive that offers instant access to any cloud storage provider (AWS, Azure, Wasabi and 30 other providers are mentioned) as well as to Storj. The Amove Drive allows users to mount their storage buckets directly from the desktop, “providing a true multi-cloud management tool that delivers immediate access to the largest files from any cloud or on-premise storage,” according to the companies.

Features include syncs between providers, file sharing, cloud-to-cloud migrations, backups, and AI-powered deduplication. Patrick Kennedy, Amove CEO, stated, “After years of development and testing over 45 services, we chose Storj as the ideal partner to deliver our users instant capacity from Amove Drives with incredible speed, cost efficiency and performance within an innovative architecture that supports remote streaming and access from anywhere.”

Cloud storage specialist Backblaze is opening up its technology as a white label to third-party vendors and other companies.

As CEO Gleb Budman explained, “Backblaze offers companies the ability to deliver the value of our cloud to their customers without the complexity of building their own high performance infrastructure. We are happy to take care of that part so that businesses can easily expand their platforms with affordable, reliable data storage.”

There are two ways customers can do this. Custom Domains lets businesses serve content to end-users from the web domain or URL of their choosing, “with no need for complex code,” and with Backblaze managing the heavy lifting of cloud storage on the back end.

Software developer Azion has chosen to go this route, with CEO Rafael Umann saying, “We can implement the security needed to serve data from Backblaze to end users from Azion’s Edge Platform, improving user experience.”

Organizations can also use an API to provision Backblaze cloud storage accounts from within their own platforms.

“Our customers produce thousands of hours of content daily and they need a place to store both their original and transcoded files,” says Murad Mordukhay, CEO at cloud video solutions provider Qencode. “The Backblaze API allows us to expand our cloud services and eliminate complexity for our customers — giving them time to focus on their business needs, while we focus on innovations that drive more value.”
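
The exact provisioning calls are Backblaze’s own (see the blog post mentioned below); as a stand-in illustration of the programmatic route, here is per-tenant bucket creation through B2’s S3-compatible endpoint, with the region and bucket name assumed.

```python
# Stand-in illustration only: creating per-customer storage through
# Backblaze B2's S3-compatible endpoint. The provisioning API described
# above is separate and documented by Backblaze; the endpoint region and
# bucket name are assumptions, and credentials come from the environment.
import boto3

b2 = boto3.client("s3",
                  endpoint_url="https://s3.us-west-004.backblazeb2.com")

# One bucket per tenant keeps customer content isolated.
b2.create_bucket(Bucket="tenant-4711-media")
```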

Backblaze published an in-depth explanation of the features on its blog.

Wasabi AiR applies AI-driven metadata, auto-tagging and multilingual speech-to-text transcription to cloud media storage. This is the result of the company’s acquisition in January of Curio AI. Video files uploaded to Wasabi AiR are immediately analyzed and compiled into a searchable metadata index.

“Why move to the cloud if you still can’t find anything?” said Wasabi co-founder and CEO David Friend. “Object storage without metadata is like a library without a catalog. Wasabi AiR works right out of the box and it’s as simple to use as popular search engines. For example, if it finds a face that it doesn’t recognize, it asks ‘Who is this?’ Using a simple UI, the user can train their own models. You can have tens of thousands of hours of video, and Wasabi AiR will take you right to the moment you are looking for.”

Wasabi claims this product “greatly reduces” the cost of metadata creation since customers pay only for the storage with no additional charge for use of the AI.
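
To see why a searchable index matters, consider a toy inverted index that maps every tag or transcript word to the clips and timecodes where it occurs; Wasabi AiR’s implementation is of course far more sophisticated.

```python
# Toy inverted index showing why a searchable metadata catalog matters:
# map every tag or transcript word to the clips (and timecodes) it occurs in.
from collections import defaultdict

index = defaultdict(list)

def ingest(clip: str, timecode: str, text: str) -> None:
    for word in text.lower().split():
        index[word].append((clip, timecode))

ingest("interview_01.mp4", "00:04:12", "CEO discusses cloud storage pricing")
ingest("broll_07.mp4", "00:00:31", "exterior shot of data center")

print(index["cloud"])  # -> [('interview_01.mp4', '00:04:12')]
```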

Dave McCarthy, research VP at analyst firm IDC, said, “Wasabi AiR represents a significant advancement in tackling the longstanding issue of managing extensive data archives, within a substantial market for intelligent media storage solutions.”

Akamai is now using NVIDIA GPUs to beef up the encoding capabilities of its cloud-based service. The new GPUs are said to be 25x faster than traditional CPU-based encoding and transcoding methods, “which presents a significant advancement in the way streaming service providers address their typical workload challenges.” Use cases outlined by Akamai include transcoding live video streams, rendering 3D graphics for VR and AR content, and training and inference for generative AI.

If you broadcast, stream, or distribute live video in the cloud, chances are you’ve spent time building, testing, and securing your workflows. AWS has a new workflow monitor that makes this easier while running AWS cloud services.

It displays the relationships between resources in a graphical signal map, so you can see which resources are in use and how they are connected.

AWS product marketing manager Dan Gehred says, “Once signal maps are created, you use the workflow monitor to create and apply alarm and notification templates to alert you when issues arise.”
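
As a hedged sketch of the kind of alarm such a template might apply: boto3’s put_metric_alarm is a real CloudWatch call, but the metric, dimensions and ARNs below are assumptions rather than the workflow monitor’s exact output.

```python
# Hedged sketch of the kind of alarm a signal-map template might apply.
# boto3's put_metric_alarm is a real CloudWatch call; the metric name,
# dimensions and ARNs below are assumptions, not the monitor's exact output.
import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="medialive-channel-1234-active-alerts",
    Namespace="AWS/MediaLive",   # assumed channel-metrics namespace
    MetricName="ActiveAlerts",   # assumed alert-count metric for the channel
    Dimensions=[{"Name": "ChannelId", "Value": "1234"},
                {"Name": "Pipeline", "Value": "0"}],
    Statistic="Maximum",
    Period=60,
    EvaluationPeriods=1,
    Threshold=0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:111122223333:ops-alerts"],  # notify on-call
)
```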