Go to Source
Author: Matt Simon
Don’t Fear the Robot Overlords—Embrace Them as Coworkers
Go to Source
Author: Matt Simon
NASA’s New Horizons Probe Prepares To Make History—Again
Go to Source
Author: Robbie Gonzalez
The Most-Read WIRED Science Stories of 2018
Go to Source
Author: Andrea Valdez
This wristband detects an opiate overdose
A project by students at Carnegie Mellon could save lives. Called the HopeBand, the wristband senses low blood oxygen levels and, if danger is imminent, sounds an alarm and sends a text message.
“Imagine having a friend who is always watching for signs of overdose; someone who understands your usage pattern and knows when to contact [someone] for help and make sure you get help,” student Rashmi Kalkunte told IEEE. “That’s what the HopeBand is designed to do.”
The team won third place in the Robert Wood Johnson Foundation’s Opioid Challenge at the Health 2.0 conference in September, and they are planning to send the band to a needle exchange program in Pittsburgh. They hope to sell it for less than $20.
Given the more than 72,000 overdose deaths in America this year, a device like this could genuinely help keep people safer.
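As described, the band’s alerting behavior reduces to watching a stream of blood-oxygen readings for a sustained drop. A minimal sketch in Python, with the threshold, streak length and sensor interface all invented for illustration (the students’ actual design is not published here):

```python
# Hypothetical alert logic in the spirit of the HopeBand: fire the alarm
# and text message when pulse-oximeter readings stay low for several
# consecutive samples. Thresholds below are assumptions, not the team's.

LOW_SPO2_THRESHOLD = 90    # percent blood oxygen; assumed danger level
CONSECUTIVE_LOW_LIMIT = 3  # low readings in a row before alerting

def check_readings(spo2_readings):
    """Return True if an alert (alarm + text message) should fire."""
    low_streak = 0
    for reading in spo2_readings:
        if reading < LOW_SPO2_THRESHOLD:
            low_streak += 1
            if low_streak >= CONSECUTIVE_LOW_LIMIT:
                return True
        else:
            low_streak = 0  # a healthy reading resets the streak
    return False
```

Requiring several consecutive low readings is a common way to avoid false alarms from a single noisy sensor sample.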
Go to Source
Author: John Biggs
Put down your phone if you want to innovate
We are living in an interstitial period. In the early 1980s we entered an era of desktop computing that culminated in the dot-com crash — a financial bubble that we bolstered with Y2K consulting fees and hardware expenditures alongside irrational exuberance over Pets.com. That last interstitial era, during which computers got smaller, weirder, thinner and more powerful, ushered us, after a long period of boredom, into the mobile era in which we now exist. If you want to help innovate in the next decade, it’s time to admit that phones, like the desktop PCs before them, are a dead end.
We create and then brush up against the edges of our creation every decade. The speed at which we improve — but not innovate — is increasing, and so the difference between a 2007 iPhone and a modern Pixel 3 is incredible. But what can the Pixel do that the original iPhone or Android phones can’t? Not much.
We are limited by the use cases afforded by our current technology. In 1903, a bike was a bike and could not fly. Not until the Wright brothers and others turned forward mechanical motion into lift were we able to leave the ground. In 2019 a phone is a phone and cannot truly interact with us as long as it remains separate from our bodies. Not until someone looks beyond these limitations will we be able to take flight.
While I won’t speculate about the future of mobile tech, I will note that until we put our phones away and look at the world anew, we will do nothing of note. We can take better photos and FaceTime each other, but until we see the limitations of these technologies we will be unable to see a world outside of them.
We’re heading into a new year (and a new CES) and we can expect more of the same. It is safe and comfortable to remain in the screen-hand-eye nexus, creating VR devices that are essentially phones slapped to our faces and big computers that now masquerade as TVs. What, however, is the next step? Where do these devices go? How do they change? How do user interfaces compress and morph? Until we actively think about this we will remain stuck.
Perhaps you’re already thinking about it. You’d better hurry. If this period ends as swiftly and decisively as the ones before it, the opportunity will be limited at best. Why hasn’t VR taken off? Because it is still on the fringes, being explored by people stuck in mobile thinking. Why are machine learning and AI moving so slowly? Because the use cases are aimed at chatbots and better customer interaction. Until we start looking beyond the black mirror (see what I did there?) of our phones, innovation will fail.
Every app launched, every picture scrolled, every tap, every hunched-over moment davening to some dumb Facebook improvement is a brick in the bulwark against an unexpected and better future. So put your phone down this year and build something. Soon it might be too late.
Go to Source
Author: John Biggs
The Very Slow Movie Player shows a film over an entire year
It seems someone took Every Frame a Painting literally: The Very Slow Movie Player is a device that turns cinema into wallpaper, advancing the image by a single second every hour. The result is an interesting household object that makes something new of even the most familiar film.
The idea occurred to designer and engineer Bryan Boyer during one of those times we all have where we are sitting at home thinking of ways to celebrate slowness.
“Can a film be consumed at the speed of reading a book?” he asked himself, slowly. “Slowing things down to an extreme measure creates room for appreciation of the object… but the prolonged duration also starts to shift the relationship between object, viewer, and context. A film watched at 1/3,600th of the original speed is not a very slow movie, it’s a hazy timepiece. A Very Slow Movie Player (VSMP) doesn’t tell you the time; it helps you see yourself against the smear of time.”
The Very Slow Movie Player is an e-paper display attached to a Raspberry Pi board; you load a movie onto the latter, and it processes and displays a single frame at a time, updating the screen with a new one every two and a half minutes.
That adds up to 24 frames per hour, as opposed to the usual 24 frames per second — 3,600 times slower than normal viewing, producing a tableau of perhaps 7,000 or 8,000 hours that you view over the course of a year or so.
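The arithmetic above is easy to check. A quick Python sketch using the article’s own numbers (24 frames per hour versus 24 per second), and assuming a typical two-hour feature film:

```python
# Back-of-envelope arithmetic for the Very Slow Movie Player (VSMP).
FRAMES_PER_SECOND = 24  # normal cinema playback rate
FRAMES_PER_HOUR = 24    # VSMP playback rate

slowdown = FRAMES_PER_SECOND * 3600 / FRAMES_PER_HOUR  # 3,600x slower
update_interval_min = 60 / FRAMES_PER_HOUR             # minutes per new frame

movie_hours = 2  # assumed typical feature length
total_frames = movie_hours * 3600 * FRAMES_PER_SECOND  # frames in the film
playback_hours = total_frames / FRAMES_PER_HOUR        # hours to display them all
playback_days = playback_hours / 24

print(f"{slowdown:.0f}x slower, one new frame every {update_interval_min} minutes")
print(f"A {movie_hours}-hour film becomes {playback_hours:,.0f} hours (~{playback_days:.0f} days)")
```

A two-hour film comes out at 7,200 hours, roughly ten months of continuous display, which squares with the article’s year-or-so estimate once you allow for longer films or pauses.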
“It is impossible to ‘watch’ in a traditional way because it’s too slow. In a staring contest with VSMP you will always lose,” writes Boyer in a post explaining the project. “It can be noticed, glanced at, or even inspected, but not watched.”
He compares it to the work of Bill Viola, whose super-slow-motion portraits are similarly impossible to watch from start to finish (unless you’re very, very patient) and therefore exist in a sort of limbo between motion picture and still image.
The image itself leaves something to be desired, of course: e-paper has essentially 1-bit color depth — black and white. So the subtleties of tone you might see in any film, color or not, will be lost to dithering.
The way it’s done helps highlight the contrasts and zones of a scene, though if you really want to appreciate Rear Window as cinema, you can watch it any time you like. But if you want to appreciate it as a process, as a relationship with time, as an object and image that exists in the context of the rest of the world and your life… for that, you have the Very Slow Movie Player.
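Since e-paper offers only black and white, mapping film frames onto it requires dithering, trading color depth for spatial texture. Here is a minimal sketch of classic Floyd–Steinberg error-diffusion dithering in Python; it is illustrative only, as Boyer’s actual image pipeline is not described here:

```python
# Floyd-Steinberg dithering sketch: reduce grayscale values to the 1-bit
# black-and-white palette of an e-paper display, diffusing each pixel's
# quantization error to its not-yet-visited neighbors.

def dither_1bit(pixels):
    """Dither a 2D list of grayscale values (0.0-1.0) to 0.0/1.0 in place."""
    h, w = len(pixels), len(pixels[0])
    for y in range(h):
        for x in range(w):
            old = pixels[y][x]
            new = 1.0 if old >= 0.5 else 0.0  # snap to black or white
            pixels[y][x] = new
            err = old - new
            # Push the error right and down, in the classic 7/3/5/1 pattern.
            if x + 1 < w:
                pixels[y][x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    pixels[y + 1][x - 1] += err * 3 / 16
                pixels[y + 1][x] += err * 5 / 16
                if x + 1 < w:
                    pixels[y + 1][x + 1] += err * 1 / 16
    return pixels

# A horizontal gray ramp: every output pixel becomes pure black or white,
# but the local density of white pixels tracks the original brightness.
ramp = [[x / 15 for x in range(16)] for _ in range(8)]
result = dither_1bit(ramp)
```

This is why a dithered frame preserves the contrasts and zones of a scene even though every individual pixel is pure black or white.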
Go to Source
Author: Devin Coldewey
Iota Biosciences raises $15M to produce in-body sensors smaller than a grain of rice
Fitness trackers and heart-rate monitors are all well and good, but if you want to track activity inside the body, the solutions aren’t nearly as convenient. Iota Biosciences wants to change that with millimeter-wide sensors that can live more or less permanently in your body and transmit wirelessly what they detect, and a $15 million Series A should put them well on their way.
The team emerged from research at UC Berkeley, where co-founders Jose Carmena and Michel Maharbiz were working on improving the state of microelectrodes. These devices are used all over medical and experimental science to monitor and stimulate nerves and muscle tissues. For instance, a microelectrode array in the brain might be able to help detect early signs of a seizure, and around the heart one could precisely test the rhythms of cardiac tissues.
But despite their name, microelectrodes aren’t really small. The tips, sure, but they’re often connected to larger machines, or battery-powered packs, and they can rarely stay in the body for more than a few weeks or months due to various complications associated with them.
Considering how far we’ve come in other sectors when it comes to miniaturization, manufacturing techniques and power efficiency, Carmena and Maharbiz thought, why don’t we have something better?
“The idea at first was to have free-floating motes in the brain with RF [radio frequency] powering them,” Carmena said. But they ran into a fundamental problem: because of its long wavelength, RF radiation requires a rather large antenna to receive it — much larger than is practical for devices meant to swim in the bloodstream.
“There was a meeting at which everything died, because we were like two orders of magnitude away from what we needed. The physics just weren’t there,” he recalled. “So we were like, ‘I guess that’s it!’ ”
But some time after, Maharbiz had a “eureka” moment — “as weird as it sounds, it occurred to me in a parking lot. You just think about it and all these things align.”
His revelation: ultrasound.
Power at the speed of sound
You’re probably familiar with ultrasound as a diagnostic tool, for imaging inside the body during pregnancy and the like — or possibly as a range-finding tool that “pings” nearby objects. There’s been a lot of focus on the venerable technology recently as technologists have found new applications for it.
In fact, a portable ultrasound company just won TechCrunch’s Startup Battlefield in Lagos.
Iota’s approach, however, has little to do with these traditional uses of the technology. Remember the principle that you have to have an antenna that’s a reasonable fraction of an emission’s wavelength in order to capture it? Well, ultrasound has a wavelength measured in microns — millionths of a meter.
So it can be captured — and captured very efficiently. That means an ultrasound antenna can easily catch enough waves to power a connected device.
Not only that, but as you might guess from its use in imaging, ultrasound goes right through us. Lots of radiation, including RF, gets absorbed by the charged, salty water that makes up much of the human body.
“Ultrasound doesn’t do that,” Maharbiz said. “You’re just Jell-O — it goes right through you.”
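The antenna-sizing argument above can be made concrete with a quick calculation. The frequencies below are illustrative assumptions, not Iota’s actual operating parameters:

```python
# Wavelength comparison behind the design choice: a receiving antenna
# needs to be a reasonable fraction of the wavelength it captures.
# Wavelength = propagation speed / frequency.

SPEED_OF_LIGHT = 3.0e8         # m/s, RF in free space
SPEED_OF_SOUND_TISSUE = 1540.0 # m/s, typical for soft tissue

def wavelength(speed_m_s, freq_hz):
    return speed_m_s / freq_hz

rf = wavelength(SPEED_OF_LIGHT, 2.4e9)        # a common RF band (assumed)
us = wavelength(SPEED_OF_SOUND_TISSUE, 10e6)  # 10 MHz ultrasound (assumed)

print(f"2.4 GHz RF wavelength:       {rf * 100:.1f} cm")
print(f"10 MHz ultrasound in tissue: {us * 1e6:.0f} microns")
print(f"Ultrasound is ~{rf / us:,.0f}x shorter")
```

At these assumed frequencies the RF wavelength is about 12.5 cm while the ultrasound wavelength is around 154 microns, which is why a millimeter-scale mote can capture ultrasound efficiently but not RF.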
The device they put together to take advantage of this is remarkably simple, and incredibly tiny. On one side is what’s called a piezoelectric crystal, something that transforms force — in this case, ultrasound — into electricity. In the middle is a tiny chip, and around the edge runs a set of electrodes.
It’s so small that it can be attached to a single nerve or muscle fiber. When the device is activated by a beam of ultrasound, voltage runs between the electrodes, and this minute current is affected by the electrical activity of the tissue. These slight changes are literally reflected in how the ultrasonic pulses bounce back, and the reader can derive electrophysiological voltage from those changes.
Basically the waves they send power the device and bounce back slightly changed, depending on what the nerve or muscle is doing. By sending a steady stream of pulses, the system collects a constant stream of precise monitoring data simply and non-invasively. (And yes, this has been demonstrated in vivo.)
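The power-and-readout loop described above can be caricatured in a few lines. The modulation model and the sensitivity constant are invented for illustration; this is not Iota’s published signal chain:

```python
# Toy model of ultrasonic backscatter sensing: the mote reflects incoming
# pulses, and the reflection amplitude shifts slightly with the voltage
# across its electrodes. The external reader inverts that shift to
# recover the electrophysiological signal.

def reflected_amplitude(incident, nerve_voltage_mv, sensitivity=0.01):
    """Echo amplitude modulated by the tissue voltage (linear toy model)."""
    return incident * (1.0 + sensitivity * nerve_voltage_mv)

def decode_voltage(incident, reflected, sensitivity=0.01):
    """Reader-side inversion of the modulation."""
    return (reflected / incident - 1.0) / sensitivity

# A simulated pulse train over a burst of nerve activity (millivolts):
nerve_activity = [0.0, 0.5, 2.0, 1.0, -0.5, 0.0]
echoes = [reflected_amplitude(1.0, v) for v in nerve_activity]
recovered = [decode_voltage(1.0, e) for e in echoes]
```

The point of the sketch is the architecture: each outgoing pulse both powers the mote and carries the measurement back, so a steady pulse train yields a continuous stream of readings with no battery or radio on the implant.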
Contained inside non-reactive, implant-safe containers, these microscopic “motes” could be installed singly or by the dozen, doing everything from monitoring heart tissue to controlling a prosthesis. And because they can also deliver a voltage, they could conceivably be used for therapeutic purposes, as well.
And to be clear, those purposes won’t be inside the brain. Although there’s no particular reason this tech wouldn’t work in the central nervous system, it would have to be smaller and testing would be much more complicated. The initial applications will all be in the peripheral nervous system.
At any rate, before any of that happens, they have to be approved by the FDA.
The long medtech road
As you might guess, this isn’t the kind of thing you can just invent and then start implanting all over the place. Implants, especially electronic ones, must undergo extreme scrutiny before being allowed to be used in even experimental treatment.
Fortunately for Iota, their devices have a lot of advantages over, say, a pacemaker with a radio-based data connection and five-year battery. The only transmission involved is ultrasound, for one thing, and there are decades of studies showing the safety of using it.
“The FDA has well-defined limits for average and peak powers for the human body with ultrasound, and we’re nowhere near those frequencies or powers. This is very different,” explained Maharbiz. “There’s no exotic materials or techniques. As far as constant low-level ultrasound goes, the notion really is that it does nothing.”
And unlike a major device like a medication port, pump, stent, pacemaker or even a long-term electrode, “installation” is straightforward and easily reversible.
It would be done laparoscopically, or through a tiny incision, said Carmena. “If it has to be taken out, it can be taken out, but it’s so minimally invasive and small and safe that we keep it,” he said.
These are all marks in Iota’s favor, but testing can’t be rushed. Although the groundwork for their devices was laid in 2013, the team has taken a great deal of time to advance the science to the point where it can be taken out of the lab to begin with.
In order to get it now to the point where they can propose human trials, Iota has raised $15 million in funding; the round was led by Horizons Ventures, Astellas, Bold Capital Partners, Ironfire and Shanda. (The round was in May but only just announced.)
The A round should get the company from its current prototype phase to a point, perhaps some 18 months distant, when it has a production version ready to present to the FDA — at which point more funding will probably be required to get through the subsequent years of testing.
But that’s the game in medtech, and all the investors know it. This could be a hugely disruptive technology in a number of fields, although at first the devices need to be approved for a single medical purpose (one Iota has decided on but can’t disclose yet).
It’s a long road, all right, but at the end of it is the fulfillment of a promise straight out of sci-fi. It may be years before you have microscopic, ultrasound-powered doodads swimming around inside you, but that future is well on its way.
Go to Source
Author: Devin Coldewey
Watch the ANYmal quadrupedal robot go for an adventure in the sewers of Zurich
There’s a lot of talk about the many potential uses of multi-legged robots like Cheetahbot and Spot — but in order for those to come to fruition, the robots actually have to go out and do stuff. And to train for a glorious future of sewer inspection (and helping rescue people, probably), this Swiss quadrupedal bot is going deep underground.
The robot is called ANYmal, and it’s a long-term collaboration between the Swiss Federal Institute of Technology, abbreviated there as ETH Zurich, and a spin-off from the university called ANYbotics. Its latest escapade was a trip to the sewers below that city, where it could eventually aid or replace the manual inspection process.
ANYmal isn’t brand new — like most robot platforms, it’s been under constant revision for years. But it’s only recently that cameras and sensors like lidar have gotten good enough and small enough that real-world testing in a dark, slimy place like sewer pipes could be considered.
Most cities have miles and miles of underground infrastructure that can only be checked by expert inspectors. This is dangerous and tedious work — perfect for automation. Imagine if, instead of yearly inspections by people, robots were swinging by once a week; if anything looked off, they would call in the humans. They could also enter areas rendered inaccessible by disasters or simply too small for people to navigate safely.
But of course, before an army of robots can inhabit our sewers (where have I encountered this concept before? Oh yeah…) the robot needs to experience and learn about that environment. First outings will be only minimally autonomous, with more independence added as the robot and team gain confidence.
“Just because something works in the lab doesn’t always mean it will in the real world,” explained ANYbotics co-founder Péter Fankhauser in the ETHZ story.
Testing the robot’s sensors and skills in a real-world scenario provides new insights and tons of data for the engineers to work with. For instance, when the environment is completely dark, laser-based imaging may work, but what if there’s a lot of water, steam or smoke? ANYmal should also be able to feel its surroundings, its creators decided.
So they tested both sensor-equipped feet (with mixed success) and the possibility of ANYmal raising its “paw” to touch a wall, to find a button or determine temperature or texture. This latter action had to be manually improvised by the pilots, but clearly it’s something it should be able to do on its own. Add it to the list!
You can watch “Inspector ANYmal’s” trip below Zurich in the video below.
Go to Source
Author: Devin Coldewey
Researchers are putting fish into augmented reality tanks
Researchers at the New Jersey Institute of Technology, while testing the “station keeping” behavior of the glass knifefish, have created an augmented reality system that tricks the animal’s electric sensing organs in real time. The fish hides by tucking itself into refuges — holes and crevices in its environment — and the researchers wanted to understand what kind of autonomous sensing it uses to keep itself safe.
“What is most exciting is that this study has allowed us to explore feedback in ways that we have been dreaming about for over 10 years,” said Eric Fortune, associate professor at NJIT. “This is perhaps the first study where augmented reality has been used to probe, in real time, this fundamental process of movement-based active sensing, which nearly all animals use to perceive the environment around them.”
The fish isn’t wearing a headset, but instead the researchers have simulated the motion of a refuge waving in the water.
“We’ve known for a long time that these fish will follow the position of their refuge, but more recently we discovered that they generate small movements that reminded us of the tiny movements that are seen in human eyes,” said Fortune. “That led us to devise our augmented reality system and see if we could experimentally perturb the relationship between the sensory and motor systems of these fish without completely unlinking them. Until now, this was very hard to do.”
To create their test, they put a fish inside a tube and synced the motion of the tube to the fish’s own swimming. As the fish swam forward and backward, the researchers watched to see what happened when the fish could see that it was directly affecting the motion of the refuge. When they synced the refuge to the motion of the fish, they were able to confirm that the fish could tell the experience wasn’t “real” in a natural sense. In short, the fish knew it was in a virtual environment.
“It turns out the fish behave differently when the stimulus is controlled by the individual versus when the stimulus is played back to them,” said Fortune. “This experiment demonstrates that the phenomenon that we are observing is due to feedback the fish receives from its own movement. Essentially, the animal seems to know that it is controlling the sensory world around it.”
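The distinction Fortune draws, between a stimulus the fish controls and one merely replayed to it, can be sketched as a toy closed-loop versus open-loop trial. All dynamics here are invented for illustration:

```python
# Toy version of the NJIT protocol: in closed-loop mode the refuge position
# is driven by the fish's own movement; in playback mode a previously
# recorded refuge trajectory is replayed regardless of what the fish does.

def run_trial(fish_moves, closed_loop, gain=1.0, recorded=None):
    """Return the refuge trajectory for a sequence of fish positions."""
    if closed_loop:
        # The refuge follows the fish's movement in real time.
        return [gain * pos for pos in fish_moves]
    # Open loop: ignore the fish and replay the recorded trajectory.
    return list(recorded)

fish_path = [0.0, 0.2, 0.5, 0.3, -0.1]
closed = run_trial(fish_path, closed_loop=True)
replayed = run_trial([0.0, 0.0, 0.0, 0.0, 0.0], closed_loop=False, recorded=closed)
```

In the closed-loop trial the stimulus is correlated with the fish’s own movement; in playback it is not, and comparing behavior across the two conditions is what reveals that the animal senses it is in control.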
Whether or not the fish can play Job Simulator is still unclear.
“Our hope is that researchers will conduct similar experiments to learn more about vision in humans, which could give us valuable knowledge about our own neurobiology,” said Fortune. “At the same time, because animals continue to be so much better at vision and control of movement than any artificial system that has been devised, we think that engineers could take the data we’ve published and translate that into more powerful feedback control systems.”
Go to Source
Author: John Biggs