Sunday, 28 July 2013

Natural Gas Boom Rewrites the Energy Rules · Social Jet Lag · Mapping the Dark Cosmos · Debut of the Mind-Controlled Robots · Impatient Futurist: Your Domestic Robot Servant Has Finally Arrived (in a Fashion)

Natural Gas Boom Rewrites the Energy Rules

Fracking has finally challenged coal's dominance.

By Jesse David Jenkins | Friday, December 21, 2012
The coal plant closures spurred by cheap shale gas have helped reduce U.S. energy-related carbon emissions to their lowest levels in 20 years. But in the energy world, everything comes at an environmental price. A typical fracking job involves pumping more than a million gallons of water, sand, and chemicals—including carcinogenic benzene, formaldehyde, and lead—into shale rock formations deep below the Earth's surface to free locked-in natural gas. Improperly drilled wells or faulty well casings can leak fracking fluids and methane gas into nearby aquifers and water wells. Near fracking operations in Pavillion, Wyoming, monitors with the Environmental Protection Agency detected unsafe levels of benzene in water-monitoring wells and methane and various hydrocarbons in public drinking water wells. Moreover, scientists suspect that the injection of used fracking fluid into deep disposal wells may have triggered dozens of recent small earthquakes in northeastern Ohio and north Texas.
Natural gas is inherently cleaner than coal: It emits about half as much carbon per kilowatt-hour of generated electricity and comes without the mercury and many other pollutants that often accompany the burning of coal. But the EPA and Cornell researchers confirmed last year that methane leaks from gas wells are a major concern. A report in Nature stated that in some cases the escape of methane, a far more potent greenhouse gas than carbon dioxide, "could effectively offset the environmental edge that natural gas is said to enjoy over other fossil fuels." Armond Cohen, executive director of the Clean Air Task Force, a nonprofit health and environmental advocacy group, cautions that the carbon reductions from the gas boom should not induce complacency about the importance of nuclear, renewable, and other zero-carbon energy sources. "The key goal is managing CO2 emissions down to almost zero," Cohen says.
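To make that "half as much carbon" comparison concrete, here is a minimal back-of-the-envelope sketch in Python. The emission factors (roughly 1.0 kg of CO2 per kilowatt-hour for a typical coal plant and about 0.45 kg for a combined-cycle gas plant) are ballpark assumptions for illustration, not figures taken from the article.

```python
# Back-of-the-envelope comparison of CO2 emitted per unit of electricity.
# Emission factors are ballpark assumptions (kg CO2 per kWh), not values
# taken from the article.
EMISSION_FACTORS = {
    "coal": 1.0,          # typical pulverized-coal plant (assumed)
    "natural_gas": 0.45,  # typical combined-cycle gas plant (assumed)
}

def annual_emissions_tons(fuel: str, mwh_generated: float) -> float:
    """Metric tons of CO2 for a year's generation from a single fuel."""
    kg = EMISSION_FACTORS[fuel] * mwh_generated * 1000  # MWh -> kWh
    return kg / 1000  # kg -> metric tons

if __name__ == "__main__":
    generation_mwh = 1_000_000  # a hypothetical 1 TWh of annual output
    for fuel in EMISSION_FACTORS:
        print(f"{fuel}: {annual_emissions_tons(fuel, generation_mwh):,.0f} t CO2")
```

Run for a hypothetical terawatt-hour of generation, the sketch shows coal emitting on the order of a million metric tons of CO2 against roughly half that for gas, which is the scale of the tradeoff the article describes.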


Social Jet Lag

Irregular sleep patterns associated with intense weekday work may drive diabetes and obesity. And sleep deprivation boosts the risk of hypertension, Alzheimer's, even cancer.

 


If you ever have the impulse to smash your alarm clock, Ludwig-Maximilians University biologist Till Roenneberg understands. This year he described the increasingly common phenomenon of "social jet lag," experienced by those who sleep short on workdays, then stay up later but sleep longer on weekends. If that is your pattern, you are more likely to be depressed and obese. "Sleep is one of the most underrated phenomena in modern society," Roenneberg says. A growing body of research is showing that if you don't get enough or get it at the wrong times, you expose yourself to a wide range of health consequences.
Eve Van Cauter, an endocrinologist at the University of Chicago, began untangling the connection between sleep deprivation, diabetes, and obesity more than a decade ago. This year her team discovered that sleep deprivation impedes the metabolism of glucose, the sugar that powers the body, in fat cells by a startling 30 percent. Lack of sleep affects appetite, too: A 2012 Swedish brain-scan study identified heightened activity in the right anterior cingulate cortex—a brain region associated with hunger control—in the sleep-deprived. 
Sleep loss is increasingly being implicated in other health problems as well, from hypertension and Alzheimer's to cancer.
 
 

Mapping the Dark Cosmos

Dark matter—the unseen stuff that makes up more than four-fifths of the matter in the universe—is finally coming into view.  What we see may change our entire picture of reality.

 

More than 80 percent of the matter in the universe consists of an unknown substance that cannot be seen through any telescope nor detected in any lab. This invisible stuff interacts with normal matter only through gravity, which is how astronomers first inferred its existence. More recently, computer models have demonstrated that dark matter is actually crucial to the visible realm. Without it, galaxies never would have pulled together. There would be no stars. There would be no people. 
Although astronomers still do not know exactly what dark matter is, in 2012 they learned a lot more about how it works. One team traced the way it spreads its tentacles throughout the cosmos. And another found hints that dark matter may not always be invisible after all.
Last January, Ludovic van Waerbeke of the University of British Columbia and Catherine Heymans of the University of Edinburgh announced that they had mapped a web of dark matter more than 1 billion light-years across. "That's the largest map ever made of dark matter," Van Waerbeke says. Although the dark stuff cannot be observed directly, its gravity bends light from any galaxies shining through it. Measuring the amount of bending reveals how much dark matter is present. 
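The relation behind that measurement is the standard weak-lensing one; none of the symbols below come from the article, but they capture the logic: the distortion (convergence) at each point on the sky is the projected mass surface density divided by a purely geometric factor, so a map of distortion is a map of mass, dark or not.

```latex
% Weak-lensing convergence: distortion traces projected mass.
% \Sigma(\vec{\theta}) is the surface mass density (dark plus luminous)
% along the line of sight; \Sigma_{\rm cr} depends only on the geometry
% (distance to the lens D_l, to the source D_s, and between them D_{ls}).
\kappa(\vec{\theta}) \;=\; \frac{\Sigma(\vec{\theta})}{\Sigma_{\rm cr}},
\qquad
\Sigma_{\rm cr} \;=\; \frac{c^{2}}{4\pi G}\,\frac{D_{s}}{D_{l}\,D_{ls}} .
```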
Van Waerbeke and Heymans collected data for more than five years using the Canada-France-Hawaii Telescope atop Mauna Kea in Hawaii. They then analyzed light from 10 million galaxies, noting exactly where concentrations of dark matter distorted the galaxies' appearance. The resulting map shows gigantic clumps and strands of dark matter separated by enormous voids, with all the visible galaxies in the universe embedded in the dark web. The structure closely resembles what computer models predicted, but Van Waerbeke notes that the new map covers less than 0.4 percent of the sky. "It doesn't mean we won't find anything weird when we go to much larger coverage," he says.
Last summer, two astrophysicists from the University of California, Irvine, took another step toward making sense of the dark universe. They detected a stream of gamma rays from the center of our galaxy, the Milky Way, and suggested the radiation might be linked to dark matter. According to some theories, dark matter consists of particles called WIMPs (weakly interacting massive particles) that could destroy each other on contact. If so, whenever dark particles collide, they would release a burst of high-energy radiation. 
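In the conventional WIMP picture (the notation here is the standard one, not drawn from the article), the expected gamma-ray flux scales with the annihilation rate and with the square of the dark matter density integrated along the line of sight, which is why the dense center of the Milky Way is the natural place to look:

```latex
% Gamma-ray flux from WIMP annihilation toward one direction on the sky.
% <\sigma v> is the annihilation cross section, m_\chi the WIMP mass,
% dN_\gamma/dE the photon spectrum per annihilation, and the J-factor
% integrates the squared dark-matter density \rho along the line of sight.
\frac{d\Phi}{dE} \;=\;
\frac{\langle\sigma v\rangle}{8\pi\, m_{\chi}^{2}}\,
\frac{dN_{\gamma}}{dE}\,
\underbrace{\int_{\rm l.o.s.} \rho^{2}\!\bigl(r(s)\bigr)\,ds}_{J\text{-factor}} .
```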
Kevork Abazajian and Manoj Kaplinghat found the gamma-ray signal in data collected by NASA’s Fermi Gamma-Ray Space Telescope. They tried to account for it from known objects, but dark matter was also consistent with the observations. The case is far from closed, though. The center of the Milky Way is a violent place, and the sheer intensity of radiation there leaves any interpretation open to question. Abazajian and Kaplinghat continue to mine data from the Fermi telescope, attempting to confirm their interpretation. If they are right, they are seeing levels of reality that go even deeper than the mind-boggling discoveries at the Large Hadron Collider.
"Dark matter is telling us there are fundamental things that we don't understand about physics," says Van Waerbeke. "Maybe we are at the beginning of a complete revolution."

Debut of the Mind-Controlled Robots

Using a brain implant, a paralyzed stroke victim directed a robotic arm to accomplish basic tasks.  People who cannot control their limbs may soon regain movement and independence.



John Donoghue, the director of the Institute for Brain Science at Brown University, could not contain his excitement. For years he had been working on a revolutionary method to pick up brain signals from paralyzed patients and translate them into commands to move mechanical limbs. If all went well in this experiment, Cathy Hutchinson, a 58-year-old woman who lost the use of her limbs in a stroke, would control a robotic arm and hand and use them to lift a bottle of coffee to her mouth—just by thinking. "Guys," Donoghue told his collaborators, "buy the most expensive camera we can afford and shoot this in high-definition. This is a historic moment." 
And so it was. Hutchinson sipped coffee from a bottle, the first time she had served herself in 14 years and the first time a person had ever guided a robotic limb with her thoughts. The achievement was reported in a May 2012 Nature article.
In a sense Donoghue, 63, has been building up to this moment all his life. As a child, he suffered from Legg-Calvé-Perthes disease, which prevented him from walking for two years. In his first job after college, he worked at the Walter E. Fernald State School, an institution for the mentally handicapped. "I was looking at brains in the lab and then looking out the window at people who had brain diseases that completely took away their humanity, their ability to interact," he says. "I've been trying to understand what the brain is doing because to me the brain is the organ of our humanity. It gives us our mental life, and that makes us what we are." DISCOVER senior editor Kevin Berger spoke with Donoghue in his Brown University office.
You have found a way to help paralyzed people by converting brain signals into computer code to maneuver a robotic arm. How does that work?

Neurons in the brain create electric signals. When the neurons are sufficiently tickled by inputs, they fire electrical impulses called spikes. We have a simple tool to record those spikes, the microelectrode, which has been around since the 1930s. EEG [electroencephalography] electrodes record the neurons' activity from outside the head or on top of the cortex, but the resolution is blurry. It’s sort of like listening from the Goodyear blimp to a crowd in the sports stadium. You need to drop the microphone right next to people's mouths to really hear them. 


So you created that neural microphone—a silicon chip with 100 electrodes implanted in the brain—and connected it with wires to a computer. Then what do you do? 

Once I have the microelectrode array in your motor cortex, the brain's command center for movement, you watch a cursor on a video moving left and right. I then say, "Imagine you’re doing that by moving your hand on a mouse." As you imagine doing this, and the cursor moves to the right, I record the number of spikes, and it's five. And when the cursor moves to the left, it’s two. So now I have a coding model. Five means right, two means left. 
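As a rough illustration of the "coding model" Donoghue describes, a minimal sketch might look like the following. Everything here, the spike counts, the two directions, the nearest-average rule, is invented for illustration and is not the actual BrainGate decoder.

```python
import statistics

# Calibration: spike counts recorded from one neuron while the patient
# imagines moving the cursor in each direction (illustrative numbers only).
calibration = {
    "right": [5, 6, 5, 4, 5],
    "left":  [2, 1, 2, 3, 2],
}

# The "coding model" is just the average firing for each imagined direction.
model = {direction: statistics.mean(counts)
         for direction, counts in calibration.items()}

def decode(spike_count: float) -> str:
    """Pick the direction whose calibrated average is closest."""
    return min(model, key=lambda d: abs(model[d] - spike_count))

print(model)       # {'right': 5.0, 'left': 2.0}
print(decode(5))   # 'right'
print(decode(2))   # 'left'
```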


You can tell what I want to do by recording a single neuron? But we have 100 billion neurons in our brain!

That's what's so remarkable. The brain operates over broad networks. There's a tendency to think you've got one neuron that’s saying "left." But if that one cell dies, it doesn’t make any difference, because the message is distributed over many neurons. Of the many million neurons in the motor cortex, most of them have some kind of information about leftness. Now, the code for a single neuron is not so simple. Sometimes imagining left might produce two spikes, sometimes four. It's variable. So we average a set of neurons together. With [my patient] Cathy, we were using a few dozen neurons, and the computer decoded the likelihood that they were signaling "go left" or "go right."
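Extending that toy example to a small population, the sketch below combines evidence across several simulated neurons and picks whichever direction is more likely under a simple Gaussian model of each cell's variable firing. It illustrates only the general idea of averaging many noisy neurons, not the far more sophisticated decoder used in the clinical work; every number and name is invented.

```python
import math

# Per-neuron calibration: (mean spike count, standard deviation) for each
# imagined direction. Numbers are illustrative, not real recordings.
neurons = [
    {"left": (2.0, 1.0), "right": (5.0, 1.2)},
    {"left": (3.0, 0.8), "right": (1.0, 0.9)},   # some neurons prefer "left"
    {"left": (4.0, 1.1), "right": (7.0, 1.5)},
]

def log_likelihood(direction: str, counts: list[float]) -> float:
    """Sum of per-neuron Gaussian log-likelihoods for one direction."""
    total = 0.0
    for cell, c in zip(neurons, counts):
        mu, sigma = cell[direction]
        total += -0.5 * ((c - mu) / sigma) ** 2 - math.log(sigma)
    return total

def decode(counts: list[float]) -> str:
    """Choose the direction the whole population most likely signaled."""
    return max(("left", "right"), key=lambda d: log_likelihood(d, counts))

print(decode([5, 1, 7]))   # -> 'right'
print(decode([2, 3, 4]))   # -> 'left'
```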

[Photo: The robot arm controlled by Donoghue's test subject was originally developed to aid amputee veterans. Credit: Jesse Burke]
How does that code then tell the robotic arm what to do?

The signal comes out of the computer, and it's converted into electronic commands that the robot arm understands as: move a little bit forward, back, up, down, go left, go right, or open or close the hand.
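That translation step can be pictured as little more than a lookup from a decoded intention to a small incremental move. The command names and step size below are hypothetical, invented purely for illustration; a real robotic arm exposes its own control interface.

```python
# Hypothetical mapping from a decoded intention to small incremental
# arm moves. The command names are invented for illustration.
STEP = 0.01  # meters per update (assumed)

COMMANDS = {
    "forward": (STEP, 0.0, 0.0),
    "back":    (-STEP, 0.0, 0.0),
    "left":    (0.0, STEP, 0.0),
    "right":   (0.0, -STEP, 0.0),
    "up":      (0.0, 0.0, STEP),
    "down":    (0.0, 0.0, -STEP),
}

def command_for(decoded_intent: str) -> tuple[float, float, float]:
    """Convert a decoded intention into a small (dx, dy, dz) move."""
    return COMMANDS.get(decoded_intent, (0.0, 0.0, 0.0))  # unknown: hold still

position = [0.0, 0.0, 0.0]
for intent in ["forward", "forward", "up", "left"]:   # decoded each update
    dx, dy, dz = command_for(intent)
    position = [position[0] + dx, position[1] + dy, position[2] + dz]
print(position)   # [0.02, 0.01, 0.01]
```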


That process sounds almost magical, controlling a device with your thoughts. But the process wasn't instant or intuitive; you needed the prerecorded sessions with Cathy Hutchinson to establish a map of her brain signals for the computer, right?

Right. We play all the data through the computer, and it gives the commands. Virtually 100 percent of people who hear Cathy's story for the first time think that she’s learning how to control a cursor or robotic arm. In reality, she doesn't learn anything. She tries to control the arm by imagining she's controlling it. And we use the neural data from her brain and the map of her brain signals we've generated to understand what she's trying to do and make that happen.


Does that mean we are safe from thought police hacking into our brains to detect whether we're about to commit a crime?

I don't think we can go there. But what might happen is that we could address schizophrenia, depression, or other psychiatric diseases. I could imagine that with an electrode array in the right location in the brain, we might learn to understand the differences in neuronal spiking between normal and aberrant brains. It may be that a disease forms aberrant patterns, and those patterns lead to the psychosis. This is a little bit sci-fi, but it's in the realm of things that could be done. Imagine you could deliver medication to the site in the brain when it's upset. With hair-thin electrodes, it is now possible to put a pump on the side of them and deliver drugs to the site when there is aberrant activity and quiet it down.
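The closed-loop idea sketched in that answer, watch for aberrant activity and release medication only then, reduces to a simple detect-and-respond loop. The sketch below is entirely hypothetical: the threshold, the pump interface, and the notion of a single spike-rate cutoff are all invented for illustration and do not describe any existing device.

```python
# Purely hypothetical closed-loop sketch: monitor spiking near an electrode
# and trigger a co-located micro-pump only when activity looks aberrant.
ABERRANT_RATE_HZ = 120.0   # spikes per second treated as "upset" (assumed)

def is_aberrant(spike_rate_hz: float) -> bool:
    return spike_rate_hz > ABERRANT_RATE_HZ

def control_step(spike_rate_hz: float, pump) -> None:
    """One loop iteration: deliver a dose only during aberrant activity."""
    if is_aberrant(spike_rate_hz):
        pump.deliver_dose()   # hypothetical pump interface
    # otherwise do nothing and keep monitoring

class FakePump:
    def deliver_dose(self) -> None:
        print("dose delivered")

pump = FakePump()
for rate in [40.0, 95.0, 150.0, 60.0]:   # simulated spike rates (Hz)
    control_step(rate, pump)             # prints once, for the 150 Hz sample
```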
You have said that your system could eventually be wireless, with implants transmitting signals to a wearable device that would steer the robotic limbs. Could paralyzed people then walk again, using an exoskeleton frame?

Technically, yes, but we don't know a lot yet about the source of leg signals. And the problem of walking is much harder. You have to coordinate both limbs and balance. Another complication is the cosmetic factor. Christopher Reeve once said, "I don’t want to look like a robot." That goes for nearly everybody who is disabled. Also, today's wheelchairs are pretty good. So our focus is the arms. If you can't move your arms, it is extraordinarily debilitating. We want to give paralyzed people back something that is extremely liberating. 


Impatient Futurist: Your Domestic Robot Servant Has Finally Arrived (in a Fashion)






Like many people with limited social skills, I’ve always wanted a robot. And I’ve never been the least put off by the strict movie rule that having a robot can only result in its owner being pushed down the stairs, sucked into the vacuum of outer space, or enslaved with what’s left of humanity. I’m well aware that movie rules are hardly ever wrong, but it hasn’t been fear of betrayal that’s kept me from having a robot helper. It’s been the lack of their existence, in spite of a century of big talk. And this has left me not only without the sort of non–emotion-experiencing companion who could really understand me but also with a lot more laundry, cooking, dirty dishes, and child care than a technophilic citizen of the 21st century should have to put up with.
Useful home robots have always been about 20 years in the future, according to experts—a discouraging estimate, since the same experts assure me every other exciting technology under development is only 5 years away. Yes, I know, you can drive over to Walmart and pick up a carpet-vacuuming “robot” to keep your lawn-mowing “robot” company. While you’re there, why don’t you also grab a “house” in the camping department? I’ve got no interest in keeping company with hundreds of dumb, whirring little things. Scampering scrubbers and pot-stirrers are way too small and stupid to push me down the stairs when I’m not looking.
I’m hardly more impressed with the current small crop of machines that fall into the category of sticking a laptop on a wheeled dress mannequin and calling it a robot. The best you’re going to do there is Luna, a human-size “robot” that will soon be widely available from a company called RoboDynamics in Santa Monica, California, for $3,000—incredibly cheap for a humanoid, but incredibly expensive for a device that can’t do much more than try not to bump into furniture and senior citizens as it desultorily wheels itself around your home, toting a tray of drinks you’ve carefully placed on its precarious, pipe-like “arms.” Don’t count on much more than that from Ava, a forthcoming armless “robot” from iRobot (the Roomba folks) that replaces the laptop head with an iPad head. Please.
No, I'm holding out for something more along the lines of Personal Robot 2, or PR2 to its friends. Now there's a robot I'd be proud to be enslaved by. Sold by Willow Garage in Menlo Park, California, PR2 doesn't merely slink around your home, it actually does useful stuff. Get this: PR2 can fold laundry, walk and pick up after dogs, and cook a complete Weisswurst breakfast. That's probably a lot more than you do around the house, assuming you're not one of those Bavarian superspouses who try to make the rest of us look bad.
And PR2 has viable competition for my enslavement: HERB (a.k.a. Home Exploring Robotic Butler, in keeping with the intergalactic law requiring all robot names to be colorless acronyms), developed by the Personal Robotics Lab at Carnegie Mellon University. HERB can, among many other things, fetch beer, which is critical—any robot I buy that can’t do as much is going straight back to Amazon. What’s more, HERB can pick up and carry around mugs of coffee and later bring the empty mugs to the sink, and has been enlisted at parties to do this all day long. This really impresses me, because it’s what I do all day long, too, and it’s taken me quite a while to get good at it.
So why don't I consider myself to be living in the age of home robots? I hate to go negative on my future best friends/masters, but I feel obligated to point out their shortcomings. PR2 can do cool things, but only under tightly controlled conditions, and with uneven results. For example, the only laundry it can fold is a towel, and it takes six minutes to fold a single one (bright sideishly, that's down from 25 minutes in earlier versions). Also, PR2 costs $400,000. That would be a big drawback for me, too, if it weren't for the generous expense budget I get as a columnist. HERB is similarly limited—it dropped eight mugs during the aforementioned party—and would probably be at least as expensive if it were buyable. Which it isn't.
