Amazon’s Fire TV Stick 4K Max drops back down to an all-time low of $35

Amazon’s most powerful streaming stick is on sale yet again for the company’s second Prime Day event of 2022. You can grab the Fire TV Stick 4K Max for $35, or $20 off its regular price. That matches its price at this year’s first Prime Day event back in July, and it’s also the lowest price we’ve seen for the device on Amazon. The Fire TV Stick 4K Max supports Dolby Vision, HDR and HDR10+ content, as well as Dolby Atmos audio. It can also join WiFi 6 networks, and Amazon says it starts apps faster and navigates more smoothly than the standard Fire TV Stick 4K.

Buy Fire TV Stick 4K Max at Amazon – $35

Like other models, this one comes with a remote control that has preset buttons for Netflix, Prime Video, Disney+ and Hulu. The remote is also powered by Alexa and can search for content and launch it with voice commands alone. You can even ask Alexa through the remote to dim your connected lights or check the weather. And if you have a compatible doorbell or security camera around your home, you can use the stick’s picture-in-picture capability to view the camera’s live feed on your screen without having to pause or close whatever you’re watching.

Out of all the Fire TV streaming devices, only the Cube set-top box is more powerful than the 4K Max. The Fire TV Cube is also on sale for $60 at the moment, or half off its original price. But if you want something cheaper, you can also get the non-Max Fire TV Stick 4K for $25 or the base Fire TV Stick for $20.

Follow @EngadgetDeals on Twitter and subscribe to the Engadget Deals newsletter for the latest tech deals and buying advice.

Hitting the Books: What the wearables of tomorrow might look like

Apple’s Watch Ultra, with its 2,000-nit display and GPS capabilities, is a far cry from its Revolutionary War-era self-winding forebears. What sorts of wondrous body-mounted technologies might we see another hundred years hence? In his new book, The Skeptics’ Guide to the Future, Dr. Steven Novella (with assists from his brothers, Bob and Jay Novella) examines the history of wearables and the technologies that enable them to extrapolate where further advances in flexible circuitry, wireless connectivity and thermoelectric power generation might lead.

The Skeptics’ Guide to the Future cover
Grand Central Publishing

Excerpted from the book The Skeptics’ Guide to the Future: What Yesterday’s Science and Science Fiction Tell Us About the World of Tomorrow by Dr. Steven Novella, with Bob Novella and Jay Novella. Copyright © 2022 by SGU Productions, Inc. Reprinted with permission of Grand Central Publishing. All rights reserved. 


Technology that Enables Wearables

As the name implies, wearable technology is simply technology designed to be worn, so it will advance as technology in general advances. For example, as timekeeping technology progressed, so did the wristwatch, leading to the smartwatches of today. There are certain advances that lend themselves particularly to wearable technology. One such development is miniaturization.

The ability to make technology smaller is a general trend that benefits wearables by extending the number of technologies that are small enough to be conveniently and comfortably worn. We are all familiar by now with the incredible miniaturization in the electronics industry, and especially in computer chip technology. Postage-stamp-sized chips are now more powerful than computers that would have filled entire rooms in prior decades.

As is evidenced by the high-quality cameras on a typical smartphone, optical technology has already significantly miniaturized. There is ongoing research into tinier optics still, using metamaterials to produce telephoto and zoom lenses without the need for bulky glass.

“Nanotechnology” is now a collective buzzword for machines that are built at the microscopic scale (although technically it is much smaller still), and of course, nanotech will have incredible implications for wearables.

We are also at the dawn of flexible electronics, also called “flex circuits” and more collectively “flex tech.” This involves printing circuits onto a flexible plastic substrate, allowing for softer technology that moves as we move. Flexible technology can more easily be incorporated into clothing, even woven into its fabric. Two-dimensional materials, like carbon nanotubes, which can form the basis of electronics and circuits, are also highly flexible. Organic circuits are yet another technology that allows for the circuits themselves to be made of flexible material, rather than just printed on flexible material.

Circuits can also be directly printed onto the skin, as a tattoo, using conductive inks that can act as sensors. One company, Tech Tats, already offers one such tattoo for medical monitoring purposes. The ink is printed in the upper layers of the skin, so the tattoos are not permanent. They can monitor things like heart rate and communicate that information wirelessly to a smartphone.

Wearable electronics have to be powered. Small watch batteries already exist, but they have finite energy. Luckily there are a host of technologies being developed that can harvest small amounts of energy from the environment to power wearables (in addition to implantable devices and other small electronics). Perhaps the earliest example of this was the self-winding watch, the first evidence of which comes from 1776. Swiss watchmaker Abraham-Louis Perrelet developed a pocket watch with a pendulum that would wind the watch from the movement of normal walking. Reportedly it took about fifteen minutes of walking to be fully wound.

There are also ways to generate electric power that are not just mechanical. Four types of ambient energy exist in the environment—mechanical, thermal, radiant (e.g., sunlight), and chemical. Piezoelectric technology, for example, converts applied mechanical strain into electrical current. The mechanical force can come from the impact of your foot hitting the ground, or just from moving your limbs or even breathing. Quartz and bone are naturally piezoelectric, and piezoelectric materials such as barium titanate and lead zirconate titanate can also be manufactured. Electrostatic and electromagnetic devices harvest mechanical energy in the form of vibrations.

There are thermoelectric generators that can produce electricity from differences in temperature. As humans are warm-blooded mammals, a significant amount of electricity can be created from the waste heat we constantly shed. There are also thermoelectric generators made from flexible material, combining flex tech with energy harvesting. This technology is mostly in the prototype phase right now. For example, in 2021, engineers published the development of a flexible thermoelectric generator made from an aerogel-silicone composite with embedded liquid metal conductors, resulting in a flexible band that could be worn on the wrist and generate enough electricity to power a small device.

Ambient radiant energy in the form of sunlight can be converted to electricity through the photovoltaic effect. This is the basis of solar panels, but small and flexible solar panels can be incorporated into wearable devices as well.

All of these energy-harvesting technologies can also double as sensing technology—they can sense heat, light, vibration, or mechanical strain and produce a signal in response. Tiny self-powered sensors can therefore be ubiquitous in our technology.

The Future of Wearable Tech

The technology already exists, or is on the cusp, to have small, flexible, self-powered, and durable electronic devices and sensors, incorporated with wireless technology and advanced miniaturized digital technology. We therefore can convert existing tools and devices into wearable versions, or use them to explore new options for wearable tech. We also can increasingly incorporate digital technology into our clothing, jewelry, and wearable equipment. This means that wearable tech will likely increasingly shift from passive objects to active technology integrated into the rest of our digital lives.

There are some obvious applications here, even though it is difficult to predict what people will find useful versus annoying or simply useless. Smartphones have already become smartwatches, or the two can pair together for extended functionality. Google Glass was an early attempt at incorporating computer technology into wearable glasses, and we know how that was received.

If we extrapolate this technology, one manifestation is that the clothing and gear we already wear could be converted into the electronic devices we already use, or enhanced with new functionality that replaces or supports those devices.

We may, for example, continue to use a smartphone as the hub of our portable electronics. Perhaps that smartphone will be connected not only to wireless earbuds as they are now, but also to a wireless monitor built into glasses, or sensors that monitor health vitals or daily activity. Potentially, the phone could communicate with any device on the planet, so it could automatically contact your doctor’s office regarding any concerning changes, or contact emergency services if appropriate.

Portable cameras could also monitor and record the environment, not just for documenting purposes but also to direct people to desired locations or services, or contact the police if a crime or disaster is in progress.

As our appliances increasingly become part of the “internet of things,” we too will become part of that internet through what we wear, or what’s printed on or implanted beneath our skin. We might, in a very real sense, become part of our home, office, workplace, or car, as one integrated technological whole.

We’ve mostly been considering day-to-day life, but there will also be wearable tech for special occupations and situations. An extreme version of this is exosuits for industrial or military applications. Think Iron Man, although that level of tech is currently fantasy. There is no portable power source that can match Iron Man’s arc reactor, and there doesn’t appear to be any place to store the massive amounts of propellant necessary to fly as he does.

More realistic versions of industrial exosuits are already a reality and will only get better. A better sci-fi analogy might be the loader exosuit worn by Ripley in Aliens. Powered metal exosuits for construction workers have been in development for decades. The earliest example is the Hardiman, developed by General Electric between 1965 and 1971. That project essentially failed and the Hardiman was never used, but since then development has continued. Applications have mostly been medical, such as helping people with paralysis walk. Industrial uses are still minimal and do not yet include whole-body suits. However, such suits can theoretically greatly enhance the strength of workers, allowing them to carry heavy loads. They could also incorporate tools they would normally use, such as rivet guns and welders.

Military applications for powered exosuits would likely include armor, visual aids such as infrared or night-vision goggles, weapons and targeting systems, and communications. Such exosuits could turn a single soldier into not just enhanced infantry, but also a tank, artillery, communications, medic, and mule for supplies.

Military development might also push technology for built-in emergency medical protocols. A suit could automatically apply pressure to a wound to reduce bleeding. There are already pressure pants that prevent shock by helping to maintain blood pressure. More ambitious tech could automatically inject drugs to counteract chemical warfare, increase blood pressure, reduce pain, or prevent infection. These could be controlled by either onboard AI or remotely by a battlefield medic who is monitoring the soldiers under their watch and taking actions remotely through their suits.

Once this kind of technology matures, it can then trickle down to civilian applications. Someone with life-threatening allergies could carry epinephrine on them to be injected, or they could wear an autoinjector that will dose them as necessary, or be remotely triggered by an emergency medical responder.

Everything discussed so far is an extrapolation from existing technology, and these more mature applications are feasible within fifty years or so. What about the far future? This is likely where nanotechnology comes in. Imagine wearing a nanosuit that fits like a second skin but that is made from programmable and reconfigurable material. It can form any mundane physical object you might need, on command. Essentially, the suit would be every tool ever made.

You could also change your fashion on demand. Go from casual in the morning to business casual for a meeting and then formal for a dinner party without ever changing your clothes. Beyond mere fashion, this could be programmable cosplay—do you want to be a pirate, or a werewolf? More practically, such a nanoskin could be well ventilated when it’s warm and then puff out for good insulation when it’s cold. In fact, it could automatically adjust your skin temperature for maximal comfort.

Such material can be soft and comfortable, but bunch up and become hard when it encounters force, essentially functioning as highly effective armor. If you are injured, it could stem bleeding, maintain pressure, even do chest compressions if necessary. In fact, once such a second skin becomes widely adopted, life without it may quickly become unimaginable and scary.

Wearable technology may become the ultimate in small or portable technology because of the convenience and effectiveness of being able to carry it around with us. As shown, many of the technologies we are discussing might converge on wearable technology, which is a good reminder that when we try to imagine the future, we cannot simply extrapolate one technology but must consider how all technology will interact. We may be making our wearables out of 2D materials, powered by AI and robotic technology, with a brain-machine interface that we use for virtual reality. We may also be creating customized wearables with additive manufacturing, using our home 3D printer.

Recommended Reading: Behind the wheel of the 2023 Mercedes-Benz EQS SUV

2023 Mercedes-Benz EQS SUV first drive: Better because it’s bigger?

John Beltz Snyder, Autoblog

Our colleagues at Autoblog have some in-depth analysis of the 2023 Mercedes-Benz EQS SUV via Snyder’s first drive experience. While it’s similar to the EQS sedan, Snyder argues the SUV variant will likely be more popular. 

Your smart thermostat isn’t here to help you

Ian Bogost, The Atlantic

A recent study found that smart thermostats don’t really save you money because you’re more likely to use the convenience of quick adjustments on your phone. So why are energy providers subsidizing them for customers? They’re gathering that sweet data and maybe even throttling your power consumption (with permission). Bogost argues that convenience is still worth it, especially when you don’t have to get out of bed to make yourself comfy. 

America’s throwaway spies

Joel Schectman and Bozorgmehr Sharafedin, Reuters

This in-depth report examines how US intelligence failed its informants in Iran while fighting a covert war with Tehran. “A faulty CIA covert communications system” made it easy for Iranian officials to find sources, even ones who had otherwise been careful about their work. 

Tesla debuts an actual, mechanical prototype of its Optimus robot

It seems like just yesterday that Elon Musk ushered a person in a spandex suit onto the Tesla AI Day 2021 stage and told us it was a robot — or at least would probably be one eventually. In the intervening 13 months, the company has apparently been hard at work, replacing the squishy bits from what the crowd saw on stage with proper electronics and mechanisms. At this year’s AI Day on Friday, Tesla unveiled the next iteration of its Optimus robotics platform and, well, at least there isn’t still a person on the inside?

tesla bot
Tesla

Tesla CEO Elon Musk debuted the “first” Optimus (again, a skinny guy in a leotard, not an actual machine) in August of last year and, true to his nature, proceeded to set out a series of increasingly incredible claims about the platform’s future capabilities — just as the Cybertruck was supposed to have unbreakable windows. As Musk explained at the time, the Optimus will run an AI similar to the company’s Autopilot system (the one that keeps chasing stationary ambulances) and be capable of working safely around humans without extensive prior training.

Additionally, Musk assured the assembled crowd, the Tesla Bot would understand complex verbal commands, have “human-level hands,” and be able to move at 5 MPH and carry up to 45 pounds despite standing under six feet tall and weighing 125 pounds. And, most incredibly, Tesla would have a working prototype of all that by 2022 — which brings us to today.

Kicking off the event, Musk was joined almost immediately on stage by an early development prototype of the robot — the very first time one of the test units had walked unassisted by an umbilical tether. Lacking any exterior paneling, which left the Tesla-designed actuators inside on full view, the robot moved at a halting, ponderous pace, not unlike Honda’s early ASIMO and certainly a far cry from the deft acrobatics that Boston Dynamics’ Atlas exhibits.

The Tesla team also rolled out a further developed, but still tethered, iteration, pictured above. “It wasn’t quite ready to walk,” Musk said, “but I think we’ll walk in a few weeks. We wanted to show you the robot that’s actually really close to what is going to production.” 

“Our goal is to make a useful humanoid robot as quickly as possible,” Musk said. “And we’ve also designed it using the same discipline that we use in designing the car, which is to say… to make the robot at high volume at low cost with higher reliability.” He estimates that they could cost under $20,000 when built at volume. 

The Optimus will be equipped with a 2.3kWh battery pack that integrates the various power control systems into a single PCB. That should be sufficient to get the robot through a full day of work, according to the Tesla engineers who joined Musk on stage during the event. 

“Humans are also pretty efficient at some things, but not so efficient at other times,” explained Lizzie Miskovetz, a senior mechanical design engineer at Tesla. While humans can sustain themselves on small amounts of food, we cannot halt our metabolisms when not working. 

“On the robot platform, what we’re going to do is we’re going to minimize that. Idle power consumption, drop it as low as possible,” she continued. The team also plans to strip as much complexity and mass as possible from the robot’s arms and legs. “We’re going to reduce our part count and our power consumption of every element possible. We’re going to do things like reduce the sensing and the wiring at our extremities,” Miskovetz said. 

What’s more, expensive and heavy materials will be swapped out for plastics that trade slight losses in stiffness for larger savings in weight. “We are carrying over most of our design experience from the car to the robot,” said Milan Kovac, Tesla’s director of Autopilot software engineering. 

To enable the Optimus to move about in real-world situations, “We want to leverage both the Autopilot hardware and the software for the humanoid platform, but it’s different in requirements and form factor,” Miskovetz said. “It’s going to do everything that a human brain does: processing vision data, making split-second decisions based on multiple sensory inputs and also communications,” thanks to integrated Wi-Fi and cellular radios.

“The human hand has the ability to move at 300 degrees per second, has tens of thousands of tactile sensors. It has the ability to grasp and manipulate almost every object in our daily lives,” Kovac said. “We were inspired by biology. [Optimus hands] have five fingers and an opposable thumb. Our fingers are driven by metallic tendons that are both flexible and strong, giving them the ability to complete wide-aperture power grasps while also being optimized for precision gripping of small, thin and delicate objects.” 

Each hand will offer 11 degrees of freedom derived from its six dedicated actuators, as well as “complex mechanisms that allow the hand to adapt to the objects being grasped,” Kovac said. “We [also] have a non-backdrivable finger drive. This clutching mechanism allows us to hold and transport objects without having to turn on the hand motors.”

“We’re starting out having something that’s usable,” Kovac concluded, “but it’s far from being useful. It’s still a long and exciting road ahead of us.” Tesla engineering plans to get the enclosed, production iteration up and walking around without a tether in the next few weeks, then begin exploring real-world applications and tangible use cases for the Optimus. 

“After seeing what we’ve shown tonight,” Kovac said, “I’m pretty sure we can get this done within the next few months or years and maybe make this product a reality and change the entire economy.”

Watch Tesla’s AI Day 2022 event at 9:15PM ET

Tesla is holding another AI Day, and it’ll be particularly easy to tune in. The automaker is streaming its 2022 event tonight at 9:15PM Eastern on YouTube (below) as well as its website. Elon Musk has warned the presentation will be “highly technical” and could last six hours, but you may have multiple reasons to watch even if you’re not fond of diagrams and in-depth explanations.

Notably, Musk said in June that Tesla pushed AI Day to September 30th in hopes of having a functional Optimus humanoid robot. It would just be a prototype, but it would show that the company’s vision of an autonomous helper exists beyond pretty 3D renders. The machine is meant to handle dangerous or monotonous tasks without requiring step-by-step instructions.

You could also see improvements to Tesla’s vehicle technology. The company’s Full Self-Driving feature is still rough, and Tesla might explain how it plans to refine the system. You could also see upgrades to Autopilot driver assistance. Behind the scenes, the company may expand the capabilities of the Dojo supercomputer it uses to train vision-based AI systems.

A Bruce Willis deepfake could appear in his stead for future film projects (updated)

Bruce Willis may have retired from acting following a diagnosis of aphasia, but a version of him will live on in future projects. Last year, the actor’s “digital twin” appeared in an ad for a Russian telecom created by a company called Deepcake. According to The Telegraph, his digital likeness may appear in future film, advertising and other projects. 

Deepcake told The Hollywood Reporter that, despite reports to the contrary, Willis has not sold his likeness rights to the company. Its involvement with the retired actor “was set up through his representatives at CAA,” according to the publication. A representative for the retired actor claimed that Willis “has no partnership or agreement with this Deepcake company.” 

Engineers created the digital double by drawing from footage of Die Hard and The Fifth Element, when Willis was 32 and 42, respectively. With his likeness now on the company’s AI platform, Deepcake can graft his face onto another actor’s in a relatively short amount of time. However, Willis’s estate has final approval on any projects. 

In the ad for Megafon, Willis’s face was swapped onto actor Konstantin Solovyov. “I liked the precision of my character. It’s a great opportunity for me to go back in time,” Willis said in a statement published by Deepcake. “With the advent of modern technology, I could communicate, work and participate in filming, even being on another continent. It’s a brand new and interesting experience for me, and I’m grateful to our team.”

In March, Willis’s family announced that he was retiring from acting due to a diagnosis of aphasia, which impairs communication and comprehension. In the last few years, the 67-year-old has appeared in a series of projects amid concern about his cognitive state.

Actors have already appeared as digital versions of themselves, notably in The Book of Boba Fett with a young Mark Hamill. Digital versions of Carrie Fisher and Peter Cushing also appeared in Rogue One: A Star Wars Story, despite the fact that both are deceased. James Earl Jones recently sold Disney the right to recreate his voice using AI so he could retire. 

The practice has stoked controversy. Deepfakes vary widely in quality, but many approach the “uncanny valley” where characters don’t look quite right because of stiff movements, dead eyes and other issues. There’s also the question of rights, as deceased actors can’t turn down posthumous film roles, even if the family or estate approves. 

Update 10/2 1:35PM ET: The Hollywood Reporter clarified that Willis did not sell his likeness rights to Deepcake. Rather, Deepcake says it “hired” a digital twin of the star. Willis or his estate will need to sign off on future use of his likeness.

Ubisoft will help jilted Stadia users transfer their purchases to PC

Stadia, Google’s ill-fated attempt at a cloud gaming service, will shut down in January. Players will be refunded for all their hardware and software purchases, except for Stadia Pro subscriptions. As it turns out, some folks will be able to keep playing certain games elsewhere. Ubisoft will help people who bought its titles on Stadia to transfer their purchases to PC.

“While Stadia will shut down on January 18th, 2023, we’re happy to share that we’re currently working to bring the games you own on Stadia to PC through Ubisoft Connect,” Ubisoft senior corporate communications manager Jessica Roache told The Verge. “We’ll have more to share regarding specific details as well as the impact for Ubisoft+ subscribers at a later date.” Google has already shut down the Stadia store, so if you were thinking of buying an Ubisoft game, getting a refund, then gaining access to the PC version for free, you’re out of luck.

Ubisoft hasn’t revealed when it will offer Stadia players access to their games on Ubisoft Connect. It also hasn’t confirmed whether Stadians will be able to transfer their save data over to PC. That said, the Ubisoft+ subscription service includes a cloud save feature, so hopefully the company can figure out a way to maintain players’ progress if they switch to a PC version.

While this is a nice gesture from Ubisoft, it might come as a small comfort to some of those who’ve been enjoying the likes of Assassin’s Creed Valhalla, Far Cry 6 and Rainbow Six Siege on Stadia. One of the big advantages of many cloud gaming services, including Stadia, is that they work on almost any computer, phone or tablet as long as you have a good internet connection. However, folks who don’t have a capable gaming PC might not be able to take advantage of this offer.

Ubisoft hasn’t been put off the idea of cloud gaming after the collapse of Stadia. Its Ubisoft+ channel is available on Amazon Luna, for one thing. “We believe in the power of streaming and cloud gaming and will continue to push the boundaries on bringing amazing experiences to our players, wherever they are,” Roache said. 

While Google has abandoned Stadia, it will still license the solid game-streaming tech to other companies through an initiative called Immersive Stream for Games. AT&T and Capcom have utilized the white-label version of the tech. Perhaps Ubisoft, whose Assassin’s Creed Odyssey was used in the first public test of what would become Stadia, will be interested in taking Google up on the offer too.

Magic Leap’s smaller, lighter second-gen AR glasses are now available

Magic Leap’s second take on augmented reality eyewear is available. The company has started selling Magic Leap 2 in 19 countries, including the US, UK and EU nations. The glasses are still aimed at developers and pros, but they include a number of design upgrades that make them considerably more practical — and point to where AR might be headed.

The design is 50 percent smaller and 20 percent lighter than the original. It should be more comfortable to wear over long periods, then. Magic Leap also promises better visibility for AR in bright light (think a well-lit office) thanks to “dynamic dimming” that makes virtual content appear more solid. Lens optics supposedly deliver higher quality imagery with easier-to-read text, and the company touts a wider field of view (70 degrees diagonal) than comparable wearables.

You can expect decent power that includes a quad-core AMD Zen 2-based processor in the “compute pack,” a 12.6MP camera (plus a host of cameras for depth, eye tracking and field-of-view) and 60FPS hand tracking for gestures. You’ll only get 3.5 hours of non-stop use, but the 256GB of storage (the most in any dedicated AR device, Magic Leap claims) provides room for more sophisticated apps.

As you might guess, this won’t be a casual purchase. The Magic Leap 2 Base model costs $3,299, while developers who want extra tools, enterprise features and early access for internal use will want to pay $4,099 for the Developer Pro edition. Corporate buyers will want to buy a $4,999 Enterprise model that includes regular, managed updates and two years of business features.

You won’t buy this for personal use as a result. This is more for healthcare, industry, retail and other spaces where the price could easily be offset by profits. However, it joins projects from Qualcomm, Google and others in showing where AR technology is going. Where early tech tended to be bulky and only ideal for a narrow set of circumstances, hardware like Magic Leap 2 appears to be considerably more usable in the real world.

You can now buy some YouTube TV add-ons without the $65 base plan

YouTube TV is now offering users the option to subscribe to standalone add-on channels without signing up for the platform’s base plan. You can choose from 20 channels, including HBO Max, Showtime and NBA League Pass. Epix and Starz, which will soon be rebranded in certain territories, are among the options as well. YouTube TV is following the likes of Apple TV, Amazon Prime Video, Roku and Sling TV in adding standalone channel subscriptions.

The cable-style YouTube TV base plan costs $65 and includes more than 85 channels (the full lineup varies slightly depending on your location). But you’ll no longer need it to access MLB.TV, Cinemax et al through the service. Users who opt out of the base plan can still take advantage of YouTube TV features such as unlimited DVR space, up to six profiles per household and three simultaneous streams.

To some, it might seem unnecessary to sign up for standalone channels through services like YouTube TV when they have their own apps. There are some benefits though, especially if you subscribe to more than one. You’ll be able to access the services from a single app that might be available on more platforms than standalone apps for Shudder, Acorn and so on. Managing your subscriptions with a single bill may be useful too.