Monthly Archives: July 2017

NYC to DC in 30 Minutes? Elon Musk Claims Verbal OK for Hyperloop

Elon Musk recently announced on Twitter that he had received “verbal government approval” for his Boring Company to build a superfast Hyperloop transit system that would take people from New York to Washington, D.C., in just under 30 minutes.

The extremely rapid transit he’s envisioning will stop in New York, Philadelphia, Baltimore and Washington, D.C., Musk said in a Twitter post, adding that each of these stops would be at the city center, with about a dozen entry and exit elevators for each city.

In further comments on Twitter, Musk said he would start these projects in parallel with an underground-tunnel-building project in Los Angeles, eventually moving on to a Los Angeles-to-San Francisco Hyperloop route, as well as one in Texas. [In Photos: Building the Superfast ‘Hyperloop One’ Transit System of the Future]

“The Boring Company has had a number of promising conversations with local, state and federal government officials. With a few exceptions, feedback has been very positive and we have received verbal support from key government decision-makers for tunneling plans, including a Hyperloop route from New York to Washington DC. We look forward to future conversations with the cities and states along this route and we expect to secure the formal approvals necessary to break ground later this year,” a Boring Company spokesman said in an email to Live Science.

(Live Science has contacted the Department of Transportation to see whether anyone from that government department has verbally OK’d such a project, but has not yet heard back.)

It’s still not clear how far this is from a literal pipe dream. Musk’s comments don’t mention who in government gave their verbal approval and whether they have final say over the project. Saying “yes” in a conversation is also a far cry from supplying all the official permits and legal approvals needed to initiate such construction.

The futuristic Hyperloop would involve people boarding special pods that rocket on a cushion of air inside a low-pressure tube from one location to another. Musk has estimated that a Hyperloop pod could reach speeds of 760 mph (1,220 km/h).
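
As a rough sanity check on the timing claim, the cruise time at Musk’s quoted top speed can be worked out directly. A minimal sketch, in which the route distance is our approximation rather than a Boring Company figure:

```python
# Back-of-the-envelope check of the "just under 30 minutes" claim.
# The route length is an assumption, not an official figure.
distance_miles = 230        # approximate NYC-to-DC route length (assumed)
top_speed_mph = 760         # Musk's estimated Hyperloop pod speed

cruise_minutes = distance_miles / top_speed_mph * 60
print(f"Cruise time at top speed: {cruise_minutes:.0f} minutes")  # ~18 minutes
# That leaves roughly 12 minutes of margin for acceleration, braking
# and the intermediate Philadelphia and Baltimore stops.
```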

A company called Hyperloop One, which is not affiliated with Musk, is building a test track for the transit system in the Nevada desert. Meanwhile, in Europe, another company has built a 100-foot-long (30 meters) test track in the Netherlands, for a planned Hyperloop that would eventually connect Amsterdam and Paris.

World’s Brightest Laser Could Pave the Way for Safer X-Rays

[Image caption: A scientist at work in the Extreme Light Laboratory at the University of Nebraska-Lincoln.]

The world’s brightest laser — which is so powerful that it can produce light pulses that are 1 billion times brighter than the surface of the sun — can “transform” visible light into X-rays, making the shape and color of objects appear different, new research shows.

These X-rays could be much less harmful than current computed tomography (CT) machines and provide much-higher-resolution images, the researchers said.

In the new study, published online June 26 in the journal Nature Photonics, a team from the University of Nebraska-Lincoln led by physicist Donald Umstadter described an experiment they had conducted using their superpowerful Diocles laser, named after an ancient Greek mathematician. [The 18 Biggest Unsolved Mysteries in Physics]

When the laser was directed onto a beam of electrons, its photons scattered off the electrons in a completely different way than weaker light would, the researchers found.

“Normally, as you turn up the light brightness with the room light dimmer switch, everything in the room looks the same as it did at lower brightness of lighting but just brighter,” said Umstadter, who works at the University of Nebraska-Lincoln’s Extreme Light Laboratory.

When the physicists turned the laser’s brightness up to a much higher level, the scattering process changed in a way that would make, for example, objects in a room appear different.

Scattering is a process in which light particles are deflected from their trajectory after hitting other particles. In the case of the Diocles laser, a single photon would normally scatter off a single electron, Umstadter said. The electron would, as a result, emit a single photon of light. However, once the laser’s light intensity reached a certain point, every electron started scattering simultaneously with a large number of photons.

“As a result, the electron emitted a photon, which had the sum of all the energies of those photons that were illuminating it, and so the scattered light had a much higher energy than the photons that illuminated it,” Umstadter said. “In fact, the energy was so high that it would be in the X-ray regime of light. It was an X-ray, not a visible photon as our laser is.”
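
To see why summing photon energies lands in the X-ray regime, consider a minimal arithmetic sketch. The article doesn’t give the laser’s wavelength, so a common Ti:sapphire value of 800 nanometers is assumed here:

```python
# Rough arithmetic: how many visible photons must pool their energy
# to make one X-ray photon? The 800 nm wavelength is an assumption.
HC_EV_NM = 1239.84                    # hc in eV*nm, so E[eV] = HC_EV_NM / wavelength[nm]

laser_wavelength_nm = 800.0           # assumed Ti:sapphire wavelength
photon_energy_ev = HC_EV_NM / laser_wavelength_nm   # ~1.55 eV per laser photon

target_xray_kev = 10.0                # a typical diagnostic X-ray energy
n_photons = target_xray_kev * 1000 / photon_energy_ev
print(f"One laser photon: ~{photon_energy_ev:.2f} eV")
print(f"Photons needed for a {target_xray_kev:.0f} keV X-ray: ~{n_photons:,.0f}")  # ~6,500
```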

The angle of the emitted light changed, which means an object illuminated with such bright light would suddenly have a different shape, Umstadter said. In addition, the energy of the light, which determines color, changed.

Still, even though it acquired X-ray properties, the light emitted by the electrons behaved differently compared with conventional X-rays. “Typical X-rays are produced by a completely different mechanism, and they look more like a light bulb,” Umstadter said.

“If a light bulb is a white light, it has all colors represented,” Umstadter added. “A laser is typically one color, and it is a very narrow beam — it’s what we call coherent. Our X-rays are much more coherent than typical X-rays, and they have a much higher resolution.”

Umstadter said an imaging system based on the technology would be able to see much smaller details than conventional X-ray machines can. For example, in medical applications, this could lead to the ability to detect changes in tissues, such as cancer tumors, at earlier stages.

Umstadter said that X-rays based on the technology could allow the radiation dose to be cut by up to a factor of 10, which would reduce patients’ risk of developing cancer.

Even small doses of X-rays are known to increase cancer risk, although by a very small amount; the smaller the dose, the lower the risk.

New 3D Computer Chip Uses Nanotech to Boost Processing Power

A new type of 3D computer chip that combines two cutting-edge nanotechnologies could dramatically increase the speed and energy efficiency of processors, a new study said.

Today’s chips separate memory (which stores data) and logic circuits (which process data), and data is shuttled back and forth between these two components to carry out operations. But due to the limited number of connections between memory and logic circuits, this is becoming a major bottleneck, particularly because computers are expected to deal with ever-increasing amounts of data.

Previously, this limitation was masked by the effects of Moore’s law, which says that the number of transistors that can fit on a chip doubles every two years, with an accompanying increase in performance. But as chip makers hit fundamental physical limits on how small transistors can get, this trend has slowed. [10 Technologies That Will Transform Your Life]

The new prototype chip, designed by engineers from Stanford University and the Massachusetts Institute of Technology, tackles both problems simultaneously by layering memory and logic circuits on top of each other, rather than side by side.

Not only does this make efficient use of space, but it also dramatically increases the surface area for connections between the components, the researchers said. A conventional logic circuit would have a limited number of pins on each edge through which to transfer data; by contrast, the researchers were not restricted to using edges and were able to densely pack vertical wires running from the logic layer to the memory layer.

“With separate memory and computing, a chip is almost like two very populous cities, but there are very few bridges between them,” study leader Subhasish Mitra, a professor of electrical engineering and computer science at Stanford, told Live Science. “Now, we’ve not just brought these two cities together — we’ve built many more bridges so traffic can go much more efficiently between them.”
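
The perimeter-versus-area distinction is the heart of the advantage: edge pins scale with the side length of a die, while vertical wires scale with its square. A minimal sketch with made-up dimensions (none of these numbers come from the study) makes the gap concrete:

```python
# Illustrative comparison of edge connections (perimeter-limited) vs.
# vertical inter-layer wires (area-limited). All dimensions are
# invented round numbers, not figures from the Stanford/MIT chip.
chip_side_mm = 10.0        # assumed 10 mm x 10 mm die
edge_pin_pitch_mm = 0.1    # assumed spacing between pins along an edge
via_pitch_mm = 0.001       # assumed 1-micrometer spacing between vertical wires

edge_pins = 4 * chip_side_mm / edge_pin_pitch_mm       # grows with the perimeter
vertical_wires = (chip_side_mm / via_pitch_mm) ** 2    # grows with the area
print(f"Edge pins:      {edge_pins:,.0f}")             # 400
print(f"Vertical wires: {vertical_wires:,.0f}")        # 100,000,000
```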

On top of this, the researchers used logic circuits constructed from carbon nanotube transistors, along with an emerging technology called resistive random-access memory (RRAM), both of which are much more energy-efficient than silicon technologies. This is important because the huge amount of energy needed to run data centers constitutes another major challenge facing technology companies.

“To get the next 1,000-times improvement in computing performance in terms of energy efficiency, which is making things run at very low energy and at the same time making things run really fast, this is the architecture you need,” Mitra said.

While both of these new nanotechnologies have inherent advantages over conventional, silicon-based technology, they are also integral to the new chip’s 3D architecture, the researchers said.

The reason today’s chips are 2D is that fabricating silicon transistors on a chip requires temperatures of more than 1,800 degrees Fahrenheit (1,000 degrees Celsius), which makes it impossible to layer silicon circuits on top of each other without damaging the bottom layer, the researchers said.

But both carbon nanotube transistors and RRAM can be fabricated at temperatures below 392 degrees F (200 degrees C), so they can easily be layered on top of silicon without damaging the underlying circuitry. This also makes the researchers’ approach compatible with current chip-making technology, they said. [Super-Intelligent Machines: 7 Robotic Futures]

Stacking many layers on top of each other could potentially lead to overheating, Mitra said, because the top layers would be far from the heat sinks at the base of the chip. But, he added, that problem should be relatively simple to engineer around, and the increased energy efficiency of the new technology means less heat is generated in the first place.

To demonstrate the benefits of its design, the team built a prototype gas detector by adding another layer of carbon nanotube-based sensors on top of the chip. The vertical integration meant that each of these sensors was directly connected to an RRAM cell, dramatically increasing the rate at which data could be processed.

This data was then transferred to the logic layer, which ran a machine-learning algorithm that enabled it to distinguish among the vapors of lemon juice, vodka and beer.
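
The article doesn’t detail the on-chip algorithm, but a nearest-centroid classifier over sensor-response vectors is about the simplest way such a vapor classifier could work. The sketch below is purely hypothetical, with invented feature values:

```python
# Hypothetical sketch of the kind of classification the logic layer
# might perform: nearest-centroid matching of sensor-response vectors.
# All feature values are invented for illustration.
import numpy as np

# Pretend each vapor excites the nanotube sensor array with a
# characteristic response pattern (rows = training samples).
training = {
    "lemon juice": np.array([[0.9, 0.1, 0.3], [0.8, 0.2, 0.4]]),
    "vodka":       np.array([[0.2, 0.9, 0.1], [0.3, 0.8, 0.2]]),
    "beer":        np.array([[0.4, 0.5, 0.8], [0.5, 0.4, 0.9]]),
}
centroids = {label: samples.mean(axis=0) for label, samples in training.items()}

def classify(reading):
    """Return the vapor whose training centroid is nearest to the reading."""
    return min(centroids, key=lambda label: np.linalg.norm(reading - centroids[label]))

print(classify(np.array([0.25, 0.85, 0.15])))  # -> "vodka"
```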

This was just a demonstration, though, Mitra said, and the chip is highly versatile and particularly well-suited to the kind of data-heavy, deep neural network approaches that underpin current artificial intelligence technology.

Jan Rabaey, a professor of electrical engineering and computer science at the University of California at Berkeley, who was not involved in the research, said he agrees.

“These structures may be particularly suited for alternative learning-based computational paradigms such as brain-inspired systems and deep neural nets, and the approach presented by the authors is definitely a great first step in that direction,” he told MIT News.

Advanced Vision Algorithm Can Help Robots Learn to See in 3D

Robots are reliable in industrial settings, where recognizable objects appear at predictable times in familiar circumstances. But life at home is messy. Put a robot in a house, where it must navigate unfamiliar territory cluttered with foreign objects, and it’s useless.

Now researchers have developed a new computer vision algorithm that gives a robot the ability to recognize three-dimensional objects and, at a glance, intuit items that are partially obscured or tipped over, without needing to view them from multiple angles.

“It sees the front half of a pot sitting on a counter and guesses there’s a handle in the rear and that might be a good place to pick it up from,” said Ben Burchfiel, a Ph.D. candidate in the field of computer vision and robotics at Duke University.

In experiments where the robot viewed 908 items from a single vantage point, it guessed the object correctly about 75 percent of the time. State-of-the-art computer vision algorithms previously achieved an accuracy of about 50 percent.

Burchfiel and George Konidaris, an assistant professor of computer science at Brown University, presented their research last week at the Robotics: Science and Systems Conference in Cambridge, Massachusetts.

Like other computer vision algorithms used to train robots, their robot learned about its world by first sifting through a database of 4,000 three-dimensional objects spread across ten different classes — bathtubs, beds, chairs, desks, dressers, monitors, night stands, sofas, tables, and toilets.

While more conventional algorithms may, for example, train a robot to recognize the entirety of a chair or pot or sofa or may train it to recognize parts of a whole and piece them together, this one looked for how objects were similar and how they differed.

When it found consistencies within classes, it ignored them in order to shrink the computational problem down to a more manageable size and focus on the parts that were different.

For example, all pots are hollow in the middle. When the algorithm was being trained to recognize pots, it didn’t spend time analyzing the hollow parts. Once it knew the object was a pot, it focused instead on the depth of the pot or the location of the handle.
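
One simple way to realize “ignore what a class shares, model how its members differ” is to subtract a per-class mean shape and keep only a low-dimensional basis for the leftover variation. The sketch below uses plain principal component analysis over flattened voxel grids as a stand-in for the authors’ actual method:

```python
# Minimal sketch: build a compact per-class shape model by removing the
# shared class mean and keeping the strongest directions of what remains.
# Plain PCA stands in here for the paper's actual machinery.
import numpy as np

def learn_class_model(voxel_grids, k):
    """voxel_grids: (n_objects, n_voxels) flattened shapes from one class.
    Returns the class mean and an (n_voxels, k) basis of residual variation."""
    mean = voxel_grids.mean(axis=0)    # shared structure (e.g. every pot is hollow)
    residuals = voxel_grids - mean     # what actually differs between objects
    _, _, vt = np.linalg.svd(residuals, full_matrices=False)
    return mean, vt[:k].T              # keep the k strongest modes

# Example with random stand-in data: 40 "pots", each a 16^3 voxel grid.
rng = np.random.default_rng(0)
pots = (rng.random((40, 16 ** 3)) > 0.5).astype(float)
mean, basis = learn_class_model(pots, k=10)
print(mean.shape, basis.shape)         # (4096,) (4096, 10)
```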

“That frees up resources and makes learning easier,” said Burchfiel.

Extra computing resources are used to figure out whether an item is right-side up and to infer its three-dimensional shape if part of it is hidden. This last problem is particularly vexing in the field of computer vision, because in the real world, objects overlap.

To address it, scientists have mainly turned to artificial neural networks, or so-called deep-learning algorithms, which process information in a way loosely similar to how the brain learns.

Although deep-learning approaches are good at parsing complex input data, such as analyzing all of the pixels in an image, and predicting a simple output, such as “this is a cat,” they’re not good at the inverse task, said Burchfiel. When an object is partially obscured, a limited view — the input — is less complex than the output, which is a full, three-dimensional representation.

The algorithm Burchfiel and Konidaris developed constructs a whole object from partial information by finding complex shapes that tend to be associated with each other. For instance, objects with flat square tops tend to have legs. If the robot can only see the square top, it may infer the legs.

“Another example would be handles,” said Burchfiel. “Handles connected to cylindrical drinking vessels tend to connect in two places. If a mug-shaped object is seen with a small nub visible, it is likely that that nub extends into a curved, or square, handle.”
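
Under a model like the one sketched above, completing a partially observed shape reduces to fitting shape coefficients against only the visible voxels, then reading the hidden voxels off the reconstruction. Again a hedged illustration with stand-in data, not the paper’s implementation:

```python
# Sketch of partial-view completion: solve for the coefficients that
# best explain the observed voxels, then reconstruct the whole grid,
# hidden parts included. Stand-in model, not the authors' code.
import numpy as np

def complete_shape(mean, basis, observed_idx, observed_vals):
    """Least-squares fit on the observed voxels only, then reconstruct all."""
    # Fit c so that basis[obs] @ c ~ observed_vals - mean[obs].
    c, *_ = np.linalg.lstsq(basis[observed_idx],
                            observed_vals - mean[observed_idx], rcond=None)
    return mean + basis @ c            # estimate of the complete voxel grid

rng = np.random.default_rng(1)
n_voxels, k = 4096, 10                 # hypothetical 16^3 grid, 10 shape modes
mean = rng.random(n_voxels)            # stand-in class mean
basis = rng.standard_normal((n_voxels, k))   # stand-in shape basis

# The robot sees only the front half of one object's voxels:
front = np.arange(n_voxels // 2)
true_shape = mean + basis @ rng.standard_normal(k)
estimate = complete_shape(mean, basis, front, true_shape[front])
print(np.allclose(estimate, true_shape))     # True: the hidden half is recovered
```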

Once trained, the robot was then shown 908 new objects from a single viewpoint. It achieved correct answers about 75 percent of the time. Not only was the approach more accurate than previous methods, it was also very fast. After a robot was trained, it took about a second to make its guess. It didn’t need to look at the object from different angles and it was able to infer parts that couldn’t be seen.

This type of learning gives the robot a visual perception that’s similar to the way humans see. It interprets objects with a more generalized sense of the world, instead of trying to map knowledge of identical objects onto what it’s seeing.

Burchfiel said he wants to build on this research by training the algorithm on millions of objects and perhaps tens of thousands of types of objects.

“We want to build this into a single robust system that could be the baseline behind a general robot perception scheme,” he said.