Monthly Archives: May 2017

This New Phone Uses So Little Power, No Battery Is Required

Imagine being out and about, only to realize that your phone’s battery life is running dangerously low and there’s nowhere nearby to charge it. Now imagine how liberating it could feel to not have to worry about that. A new cellphone prototype could one day provide such relief because it doesn’t need a battery at all, according to a new study.

The phone, a voice call-only device, is by no means the sexiest cell on the block — the calls crackle and the phone only works within a stone’s throw of a computer that serves as a sort of cell tower. But how does the device work without a battery?

The cellphone requires so little power — only a few microwatts, rather than the 100 microwatts a smartphone uses for voice calls — that the power it does need can be collected from the environment, according to the researchers. A tiny photodiode, smaller than an adult’s pinky nail, collects ambient light, while a radio-frequency harvester makes it possible to use energy sent out wirelessly from a homemade cell tower, called a base station.
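As a rough sanity check on those figures, here is a back-of-the-envelope power budget. The individual harvest numbers below are our own illustrative assumptions — the study only says the total comes to a few microwatts:

```python
# Illustrative power budget for a battery-free phone. The harvest figures
# below are assumed orders of magnitude, not values from the study.
PHONE_BUDGET_W = 3.5e-6      # "a few microwatts" for the voice-call circuitry
SMARTPHONE_VOICE_W = 100e-6  # ~100 microwatts a smartphone spends on a call

PHOTODIODE_W = 2e-6          # tiny photodiode under ambient light (assumed)
RF_HARVEST_W = 2e-6          # RF power gathered from the base station (assumed)

harvested = PHOTODIODE_W + RF_HARVEST_W
print(f"Harvested: {harvested * 1e6:.1f} uW vs budget {PHONE_BUDGET_W * 1e6:.1f} uW")
print("Budget covered:", harvested >= PHONE_BUDGET_W)
print(f"A smartphone call needs ~{SMARTPHONE_VOICE_W / PHONE_BUDGET_W:.0f}x more power")
```

Even with pessimistic assumptions, a few microwatts of ambient light plus beamed RF covers the budget — which is the whole point of driving consumption that low.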

To make even such a simple-sounding phone — one that doesn’t draw on a battery — required the phone’s developers, a team of researchers from the University of Washington, to overcome a hurdle inherent in other battery-free devices.

The trick others have used to enable devices to work without a battery is to alternate periods of activity with periods of energy collection. That is, the devices would switch off periodically, which, while practical enough for a camera or a temperature sensor, would be maddening for a phone.

To keep their phone working continuously, the researchers chose a perhaps counterintuitive approach: to go analog. The battery-free cellphone incorporates a technology called analog backscatter, a way to absorb or reflect a signal that requires less power than generating a signal, in the same way using a mirror to reflect the light from a flashlight takes less power than generating the light in the first place.

“By doing the signals in an analog way, we actually got the power consumption so low that you never have to turn off your phone,” Vamsi Talla, one of the phone’s designers and a computer science and engineering research associate at the University of Washington, told Live Science. The power-heavy work of converting analog signals into digital ones is then outsourced to the in-lab base station.
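The backscatter idea can be caricatured in a few lines of code. The toy model below is our own illustration of the principle, not the team’s actual circuit: the phone never synthesizes a carrier wave, it only varies how strongly it reflects a carrier that the base station is already transmitting.

```python
import math

def base_station_carrier(t, freq_hz=915e6):
    """Carrier the base station transmits; the phone never generates this."""
    return math.cos(2 * math.pi * freq_hz * t)

def backscatter(carrier_sample, voice_sample):
    """The phone only tunes its antenna's reflection coefficient.

    voice_sample in [-1, 1] maps to a reflectivity in [0, 1]; the
    reflected wave is the incident carrier scaled by that reflectivity.
    """
    reflectivity = 0.5 * (voice_sample + 1.0)  # 0 = absorb, 1 = fully reflect
    return reflectivity * carrier_sample

# A 1 kHz "voice" tone amplitude-modulates the reflected carrier.
samples = [backscatter(base_station_carrier(t), math.sin(2 * math.pi * 1e3 * t))
           for t in (i / 1e10 for i in range(100))]
```

Scaling an existing wave is essentially free compared with generating one, which is why the analog path gets the power draw down to microwatts.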

“It’s the first type of their system in the world that demonstrates that you are actually able to make a phone call with just a microwatt power consumption,” Pengyu Zhang, a postdoctoral researcher in electrical engineering at Stanford University who was not involved in the study, told Live Science. “That’s amazing.”

Zhang does see one major obstacle in the way of battery-free phones becoming commonplace, though.

“If you look at a real cellphone base station, it involves two links: uplink from the cellphone to the base station and downlink from the base station to the cellphone,” he said. “You have to enable the communication in both directions. However, if you implement the communication — the downlink, where the base station talks to the cellphone — the power consumption of the cellphone is actually very, very high. And I’m not sure how can you enable such capabilities with this design.”

Talla acknowledged there’s still a long way to go, particularly when it comes to integrating the technology into cell towers, but he said he’s hopeful 5G networks — the next level of telecommunications standards — might help make commercial battery-free cellphones a reality. He imagines that someday people might own both a battery-powered smartphone and a battery-free phone.

“Let’s say your phone is low on battery or the battery has died, then you can use this phone, at least, to make an emergency 911 phone call,” Talla told Live Science. “That could be a lifesaver in a lot of scenarios.”

Butterflies’ Optical Wing Trick Can Help Create Bright, Lifelike Holograms

Holograms have long captured the public’s imagination. Whether it’s Star Wars fans dreaming of holographic messages and chess games, concertgoers standing in awe before a resurrected Tupac Shakur, or the holographic future envisioned in the upcoming Blade Runner 2049, the hologram concept seems to offer something for everyone.

But despite the development of modern, laser-based hologram technology since the 1960s, the only holograms most of us encounter today are the blurry security images on our credit cards or the occasional dimly lit display in a science museum.

Now a team of engineers from the University of Utah claims to have developed a game-changing technology that can cheaply create photorealistic 3D holograms that are viewable with nothing more than a flashlight. In a paper published in Scientific Reports, the researchers explain how they used complex 3D nanostructures to produce holograms with the kind of rich colors and bright display that may one day make sophisticated holograms an everyday reality.

To understand how today’s hologram technology works, it’s helpful to compare it to regular photographs. A photographic camera uses lenses and a natural light source to record the light emitted from a scene on a photographic medium. The result is a 2D image that faithfully matches the original scene from a specific angle or vantage.

A hologram, however, is a recording of the full light field produced by an object in three dimensions. To capture that scattered light field requires a powerful light source like a laser, which is split and directed by mirrors to strike the object from all sides.

Ordinary holograms record the light field on a chemical medium similar to photographic paper, which to the naked eye looks like nothing more than a random collection of dots and lines. To actually produce the holographic image, you need to shine another laser light on or through the recorded hologram. The resulting ghost-like, floating image can then be viewed from many angles.

Conventional hologram technology has some serious limitations, according to Rajesh Menon, associate professor of electrical and computer engineering at the University of Utah and lead author of the new paper. First, the holograms produced by these laser-based systems are very dim and only clearly visible in dark rooms. Second, if you want a hologram with many colors, you need a laser for each color, which quickly gets expensive. Then there are issues with the mass-produced sticker-style holograms used for security, which are distorted by a rainbow shimmering effect.

The new process developed by Menon and his team appears to solve all of these issues while greatly reducing the production and display costs. The magic is in the holographic recordings, which are transparent sheets of plastic embossed with a 3D nanostructure of microscopic hills and valleys. Instead of absorbing white light and only reflecting back certain wavelengths, the nanoscale topography of the hologram is engineered to manipulate and tune light so that it produces a bright, full-color 3D image from the simple beam of a flashlight.

The technology is similar to an evolutionary adaptation exhibited in certain butterfly species. Color in nature is usually a product of pigments that absorb certain wavelengths of light and reflect others. But these butterflies boost the brilliance of their iridescent wings by bouncing light across microscales instead of absorbing it. As some wavelengths are canceled out through interference, a brilliant pure blue is reflected back to the viewer.
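The physics behind that pure blue is ordinary thin-film interference: a stack of layers spaced a distance d apart reflects wavelength λ constructively when 2nd = mλ. Here is a quick estimate of the spacing a butterfly’s wing scales would need, using a textbook ballpark refractive index for chitin — these are illustrative values, not measurements from this research:

```python
# Constructive-interference condition for a stack of thin layers at
# normal incidence: 2 * n * d = m * wavelength. The chitin index and
# target wavelength are textbook ballpark values, not from this study.
CHITIN_INDEX = 1.56  # approximate refractive index of chitin
BLUE_NM = 470.0      # wavelength of the butterfly's brilliant blue, in nm

def layer_spacing_nm(wavelength_nm, n=CHITIN_INDEX, order=1):
    """Layer spacing d that makes `wavelength_nm` reflect constructively."""
    return order * wavelength_nm / (2.0 * n)

d = layer_spacing_nm(BLUE_NM)
print(f"Layer spacing for pure blue: ~{d:.0f} nm")  # ~151 nm
```

Structures on the order of 150 nm are far below the wavelength of visible light, which is why both the butterfly scales and Menon’s holograms qualify as nanostructures.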

Menon explained that his computer-generated microstructures serve a similar purpose, increasing the efficiency and brightness of the hologram by redirecting light rather than absorbing it.

“We take all the colors of light that come in and essentially displace them slightly,” he said. “Let’s say we’re creating an American flag. I want the red here, the blue there, and I want white everywhere else. I can design my structure to essentially displace the colors very efficiently.”

Since the 3D nanostructures can be stamped onto normal plastic, the holograms will be relatively affordable to reproduce, similar to the mass-production of CDs or DVDs. That could help Menon’s holograms compete in the security market. Instead of the rainbow-streaked stickers on credit cards and driver’s licenses, we could soon have photorealistic holograms that are much more difficult to forge.

While the paper only describes the production of 2D holograms, Menon says that his team has also successfully made static 3D holograms using the same technology. But he hasn’t lost sight of the ultimate goal, which is a full-motion interactive hologram straight out of sci-fi. He said that this initial research points to a path forward, but that many engineering challenges remain.

“To create dynamic images, you need to be able to change the pattern that you’re imprinting as a function of time,” Menon said. “There are technologies that we can borrow upon to do this, but they need some improvement.”

Menon has launched a private company called PointSpectrum to continue developing the hologram technology, which he hopes will soon compete with bulky virtual reality headsets in providing immersive holographic experiences at theme parks, movie theaters, schools, and more.

The upcoming solar eclipse is an opportunity to prove Einstein right

For some skywatchers, the upcoming total solar eclipse on Aug. 21 is more than just a chance to catch a rare sight of the phenomenon in the United States. It’s also an opportunity to duplicate one of the most famous experiments of the 20th century, which astrophysicist Arthur Eddington performed in an attempt to prove that light could be bent by gravity, a central tenet of Albert Einstein’s theory of general relativity.

Amateur astronomer Don Bruns is among those hoping to re-do the experiment. “I thought of it about two years ago. I thought, surely, other people have done it,” he told Live Science. “But no one had done it since 1973,” Bruns said, when a team from the University of Texas went to Mauritania for the solar eclipse on June 30 of that year.

The group ran into technical problems, though, and could not confirm Eddington’s results with much accuracy. Other attempts — such as one made for an eclipse on Feb. 25, 1952, in Khartoum by the National Geographic Society — fared somewhat better.

In 1915, Einstein published his theory of general relativity, which states that light will bend around massive objects because space itself becomes curved around such objects. A chance to test the theory came several years later, when a total solar eclipse was set to darken skies on May 29, 1919.

For the 1919 eclipse, Eddington led an expedition to measure the deflection of light from stars near the sun in the sky. Observing simultaneously from Brazil and Africa, Eddington and his colleagues noted that the positions of the stars close to the solar limb differed by a small amount from their catalogued positions, agreeing with the predicted 1.75 arc seconds (or 0.00049 degrees) of deflection. The announcement that the experiment was a success made Einstein famous.
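That predicted 1.75 arc seconds follows directly from general relativity’s formula for a light ray grazing the solar limb, δ = 4GM/(c²R). A quick check with standard solar constants reproduces the figure:

```python
# Light deflection at the solar limb: delta = 4 * G * M / (c**2 * R).
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg
C = 2.998e8        # speed of light, m/s
R_SUN = 6.957e8    # solar radius, m
ARCSEC_PER_RAD = 206265.0

delta_rad = 4 * G * M_SUN / (C**2 * R_SUN)
delta_arcsec = delta_rad * ARCSEC_PER_RAD
print(f"Predicted deflection: {delta_arcsec:.2f} arc seconds")  # ~1.75
```

Notably, Newtonian gravity treating light as particles predicts exactly half this value, which is why measuring the full 1.75 arc seconds was such a decisive test.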

But later analyses of Eddington’s data seemed to suggest that the astrophysicist’s confirmation might not have been the slam dunk he thought it was. Bruns said the debate over Eddington’s data is why he wants to do the experiment again.

“All these experiments, and the best they could get was maybe 10 percent error,” he said. “I think I can get 2 percent.” Modern instrumentation, as well as more accurate measurements of the positions of the stars, should help refine the measurements needed to replicate Eddington’s experiment this time around, he added.

Bruns is taking no chances; he’s going to a high-altitude location in Wyoming, where he’s likely to have clear skies for the eclipse. And to make sure his telescope’s aim is as accurate as it can be, he plans to stabilize his telescope mount by laying down a concrete slab the day before. “We have some quick-set cement,” he said. The slab will also help ensure his mount is absolutely level.

And Bruns is not alone. Richard Berry, the former editor in chief of Astronomy Magazine, will be using his home-built observatory (known as Alpaca Meadows Observatory) to duplicate the Eddington experiment from Lyons, Oregon.

“I’m working in coordination with Toby Dittrich of Portland Community College and a group of four physics students,” he told Live Science in an email. “Toby will be on the Oregon coast, and one of the students will be in eastern Oregon at the Oregon Star Party. Since I live on the center line [the path of totality], one or two of the students and I will take images for the experiment.”

Berry has taken spectrographic images of the solar corona before, but this experiment is harder, because it involves taking an image of the star field that the sun is located in when the sun isn’t there, and it requires getting a very precise measurement of the stellar positions when the sun is there during the eclipse.

There’s also an outreach element to duplicating the experiment, said Rachel Freed, a science-curriculum consultant at Sonoma State University in Rohnert Park, California. “There’s been a massive move to raise awareness,” she said, in light of the August eclipse being visible across the United States. Sonoma State’s Education and Public Outreach department has a website that describes how amateur astronomers can take part in the event, and what equipment they should use to view the eclipse.

Bradley Schaefer, a professor of astronomy at Louisiana State University in Baton Rouge, has detailed the equipment that skywatchers will need to do a modern-day version of Eddington’s experiment. On Schaefer’s website, he says it’s possible for modern skywatchers to use off-the-shelf equipment and get much better accuracy than Eddington did nearly a century ago. According to the site, his goal is to get many people involved, because more measurements mean better precision and accuracy. In that sense, amateur astronomers could make some real contributions to science during the upcoming solar eclipse.
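The “more measurements mean better precision” point is just the statistics of averaging: for independent measurements, the standard error of the combined result shrinks as 1/√N. A small illustration — the single-observer error below is an assumed figure, not a number from Schaefer’s site:

```python
import math

TRUE_DEFLECTION = 1.75        # arc seconds, the GR prediction
SINGLE_OBSERVER_SIGMA = 0.18  # assumed ~10% error for one amateur setup

def combined_error(n_observers, sigma=SINGLE_OBSERVER_SIGMA):
    """Standard error of the mean of N independent measurements."""
    return sigma / math.sqrt(n_observers)

for n in (1, 25, 100):
    print(f"{n:3d} observers -> ~{combined_error(n):.3f} arcsec error")
```

Under these assumptions, a hundred independent amateur measurements would shrink a 10 percent single-observer error to about 1 percent, which is why crowdsourcing the experiment is scientifically attractive.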

But even if you don’t plan on conducting any science during the celestial event, an eclipse is still well worth seeing, Freed said — even without any fancy equipment. “During totality, you don’t want to use anything,” she said. “Just look at it, especially if you’ve never seen one.”

Lego Boost: The Best Robot Kit for Kids

Toys that teach kids to code are as hot in 2017 as Cabbage Patch Kids were in 1983, and for good reason. For today’s generation of children, learning how to program is even more important than studying a second language. Though there are many robot kits on the market that are designed for this purpose, Lego Boost is the best tech-learning tool we’ve seen for kids. Priced at a very reasonable $159, Boost provides the pieces to build five different robots, along with an entertaining app that turns learning into a game that even preliterate children can master.

Boost comes with a whopping 847 different Lego bricks, along with one motor (which also serves as a dial control on some projects), one light/IR sensor and the Move Hub, a large white and gray brick with two built-in motors that serves as the central processing unit for the robot. The Hub connects to your tablet via Bluetooth, to receive your programming code, and to the other two electronic components via wires.

You can build five different robots with the kit: a humanoid robot named Vernie, Frankie the Cat, the Guitar 4000 (which plays real music), a forklift called the “M.I.R. 4” and a robotic “Auto Builder” car factory. Lego said that it expects most users to start with Vernie, who looks like a cross between film robots Johnny 5 and Wall-E and offers the most functionality.

To get started building and coding, kids have to download the Boost app to their iPad or Android tablets. You’ll need to have the app running and connected to the Move Hub every time you use the robot. All of the processing and programming takes place on your mobile device, and the sound effects (music, the robot talking) will come out of your tablet’s speaker, not the robot itself.

Lego really understands how young children learn and has designed the perfect interface for them. The Boost app strikes a balance among simplicity, depth and fun. Boost is officially targeted at 7- to 12-year-olds, but the software is so intuitive and engaging that, within minutes of seeing the system, my 5-year-old was writing his own programs and begging me to extend his bedtime so he could discover more.

Neither the interface nor the block-based programming language contains any written words, so even children who can’t read can use every feature of the app. When you launch Boost, you’re first shown a cartoonish menu screen that looks like a room with all the different possible robots sitting in different spots. You just tap on the image of the robot you want to build or program, and you’re given a set of activities that begin with building the most basic parts of the project and coding them.

As you navigate through the Boost program, you need to complete the simplest levels within each robot section before you can unlock the more complicated ones. Any child who has played video games is familiar with and motivated by the concept of unlocking new features by successfully completing old ones. This level-based system turns the entire learning process into a game and also keeps kids from getting frustrated by trying advanced concepts before they’re ready.

Boost runs on modern iPads or Android devices that have at least a 1.4-GHz CPU, 1GB of RAM, Bluetooth LE, and Android 5.0 or above. (I also downloaded Boost to a smartphone, but the screen was so small that it was difficult to make out some of the diagrams.)

Unfortunately, Lego doesn’t plan to list the program in Amazon’s app store, which means you can’t easily use Boost with a Fire tablet, the top-selling tablet in the U.S. I was able to sideload Boost onto my son’s Fire 7 Kids Edition, but most users won’t have the wherewithal to do that. Lego makes its Mindstorms app available to Fire devices, so we hope the company will eventually see fit to do the same with Boost.

When you load the Boost app for the first time, you need to complete a simple project that involves making a small buggy before you can build any of the five robots. This initial build is pretty fast, because it involves only basic things like putting wheels onto the car, programming it to move forward and attaching a small fan in the back.

Like the robot projects that come after it, the buggy build is broken down into three separate challenges, each of which builds on the prior one. The first challenge involves building the buggy and programming it to roll forward. Subsequent challenges involve programming the vehicle’s infrared sensor and making the fan in the back move.

After you’ve completed all three buggy challenges, the five regular robots are unlocked. Each robot has several levels within it, each of which contains challenges that you must complete. For example, Vernie’s first level has three challenges that help you build him and use his basic functions, while the second level has you add a rocket launcher to his body and program him to shoot.

If a challenge includes building or adding blocks to a robot, it gives you step-by-step instructions that show you which blocks go where, and only after you’ve gone through these steps do you get to the programming portion.

When it’s time to code, the app shows animations of a finger dragging the coding blocks from a palette on the bottom of the screen up onto the canvas, placing them next to each other and hitting a play button to run the program. This lets the user know exactly what to do at every step, but also offers the ability to experiment by modifying the programs at the end of each challenge.

In Vernie’s case, each of the first-level challenges involves building part of his body. Lego Design Director Simon Kent explained to us that, because a full build can take hours, the company wants children to be able to start programming before they’re even finished. So, in the first challenge, you build the head and torso, then program him to move his neck, while in the later ones, you add his wheels and then his arms.

Like almost all child-coding apps, Boost uses a pictorial, block-based programming language that involves dragging interlocking pieces together, rather than keying in text. However, unlike some programming kits we’ve seen, which require you to read text on the blocks to find out what they do, Boost’s system is completely icon-based, making it ideal for children who can’t read (or can’t read very well) yet.

For example, instead of seeing a block that says, “Move Forward” or “Turn right 90 degrees,” you see blocks with arrows on them. All of the available blocks are located on a palette at the bottom of the screen; you drag them up onto the canvas and lock them together to write programs.

Some of the icons on the blocks are less intuitive than an arrow or a play button, but Boost shows you (with an animation) exactly which blocks you need in order to complete each challenge. It then lets you experiment with additional blocks to see what they do.

What makes the app such a great learning tool is that it really encourages and rewards discovery. In one of the first Vernie lessons, there were several blocks with icons showing the robot’s head at different angles. My son was eager to drag each one into a program to see exactly what it did (most turned the neck).

Programs can begin with either a play button, which just means “start this action,” or a condition, such as shaking Vernie’s hand or putting an object in front of the robot’s infrared sensor. You can launch a program either by tapping on its play/condition button or on the play button in the upper right corner of the screen, which runs every program you have on screen at once.

Because the programs are so simple, there are many reasons why you might want to have several running at once. For example, when my son was programming the guitar robot, he had a program that played a sound when the slider on the neck passed over the red tiles, another one for when it passed over the green tiles and yet another for the blue tiles. In a complex adult program, these would be handled by an if/then statement, but in Boost, there are few loops (you can use them in the Creative Canvas free-play mode if you want), so making several separate programs is necessary.
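The several-small-programs pattern that Boost encourages maps neatly onto event-driven code: one trigger, one action, all “running” at once. Here is a sketch of the idea using the guitar example above — our own analogy, not Lego’s actual runtime:

```python
# Each "program" is a (trigger, action) pair, like a Boost block stack
# that starts with a condition block. All programs run at once: every
# event is offered to every program, and matching ones fire.
programs = []

def program(trigger_color):
    """Register an action that runs when the slider passes that color."""
    def register(action):
        programs.append((trigger_color, action))
        return action
    return register

played = []

@program("red")
def red_note():
    played.append("C")

@program("green")
def green_note():
    played.append("E")

@program("blue")
def blue_note():
    played.append("G")

def slider_passes(color):
    """Dispatch one event to every registered program."""
    for trigger, action in programs:
        if trigger == color:
            action()

for color in ["red", "blue", "green", "red"]:
    slider_passes(color)

print(played)  # ['C', 'G', 'E', 'C']
```

Three tiny independent programs replace one if/elif chain — the same trade-off a child makes on the Boost canvas.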

While the program(s) run, each block lights up as it executes, so you know exactly what’s going on at any time. You can even add and remove blocks, and the programs will keep on executing. I wish all the adult programming tools I use at work had these features!

Though you write programs as part of each of the challenges, if you really want to get creative, you need to head to the Coding Canvas mode. In each robot’s menu, to the right of the levels, there’s a red toolbox that you can tap on to write your own custom programs. As you complete different challenges that feature new functions, your Coding Canvas toolbox gets filled up with more code blocks that you can use.

My son had an absolute blast using the Guitar 4000’s toolbox mode to write a program in which moving the slider over the different colors on the guitar neck would play different clips of his voice.

Users who want to build their own custom robots and program them can head over to the Creative Canvas free-play mode by tapping on the open-window picture on the main menu. There, you can create new programs with blocks that control exactly what the Move Hub, IR sensor and motor do. So, rather than showing a block with a guitar-playing icon, as it does within the Guitar 4000 menus, Boost shows a block with a speaker on it, because you can choose any type of sound for your custom robot.

In both Creative Canvas and Coding Canvas modes, Lego makes it easy to save your custom programs. The software automatically assigns names (which, coincidentally, are the names of famous Lego characters) and colorful icons to each of your programs for you, but children who can read and type are free to alter the names. All changes to programs are autosaved, so you never have to worry about losing your work.

As you might expect from Lego, Boost offers a best-in-class building experience with near-infinite expandability and customization. The kit comes with 847 Lego pieces, which include a combination of traditional-style bricks, with their knobs and grooves, and Technic-style bricks that use holes and pins.

The building process for any of the Boost robots (Vernie, Frankie the Cat, M.I.R. 4, Guitar 4000 and Auto Builder) is lengthy but very straightforward. During testing, we built both Vernie and the Guitar 4000 robots, and each took around 2 hours for adults to complete. Younger kids, who have less patience and worse hand-eye coordination, will probably need help from an adult or older child, but building these bots provides a great opportunity for parent/child bonding time. My 5-year-old (2 years below the recommended age) and I had a lot of fun putting the guitar together.

As part of the first challenge (or first several challenges), the app gives you a set of step-by-step instructions that show which bricks to put where. The illustrated instruction screens are very detailed and look identical to the paper Lego instructions you may have seen in any of the company’s kits. I just wish that the app made these illustrations 3D so you could rotate them and see the build from different angles, as you can in the app for UBTech’s Jimu robot kits.

All of the bricks connect together seamlessly and will work with any other bricks you already own. You could also easily customize one of the five recommended Boost robots with your own bricks. Imagine adorning Vernie’s body with pieces from a Star Wars set or letting your Batman minifig ride on the M.I.R. 4 forklift.

I really love the sky-blue, orange and gray color scheme Lego chose for the bricks that come with Boost, because it has an aesthetic that looks both high-tech and fun. From the orange wings on the Guitar 4000 robot to Vernie’s funky eyebrows, everything about the blocks screams “fun” and “inviting.”

At $159, the Lego Boost offers more for the money than any of the other robot kits we’ve reviewed, but it’s definitely designed for younger children who are new to programming. Older children or those who’ve used Boost for a while can graduate to Lego’s own Mindstorms EV3 kits, which start at $349 and use their own block-based coding language.

Starting at $129, UBTech’s line of Jimu robots offers a few more sensors and motors than Boost, along with a more complex programming language, but they definitely target older and more experienced kids, and to get a kit that makes more than one or two robots, you need to spend over $300. Sony’s Koov kit is also a good choice for older and more tech-savvy children, but it’s way more expensive than Boost (it starts at $199, but you need to spend at least $349 to get most features), and its set of blocks is much less versatile than Lego’s.

Tenka Labs’ Circuit Cubes start at just $59 and provide a series of lights and motors that come with Lego-compatible bricks, but these kits teach electronics skills, not programming.

The best robot/STEM kit we’ve seen for younger children, Lego Boost turns coding into a game that’s so much fun your kids won’t even know they’re gaining valuable skills. Because it uses real Lego bricks, Boost also invites a lot of creativity and replayability, and at $159, it’s practically a steal.

It’s a shame that millions of kids who use Amazon Fire tablets are left out of the Boost party, but hopefully, Lego will rectify this problem in the near future. Parents of older children with more programming savvy might want to consider a more complex robot set such as Mindstorms or Koov, but if your kid is new to coding and has access to a compatible device, the Boost is a must-buy.