The Mouse-Box includes a 1.4GHz CPU and 128GB of storage

We’ve seen plenty of compact PCs in the past, but few as tiny as the Mouse-Box. This new invention from a team of Polish engineers packs a fully-functioning computer into a mouse that you can hold in your hand. All you need to add is a screen and a keyboard and you’re ready to start work, get online or launch a presentation. The Mouse-Box is currently at the prototype stage and its makers are looking for funding to bring it to production.

Its level of portability is a huge selling point, because if you’re going somewhere with a monitor or projector available, then all you need is a pocket rather than a laptop bag.

Inside the mouse you get a 1.4GHz dual-core ARM CPU, 128GB of flash storage, two USB 3.0 ports, a micro-HDMI output and Wi-Fi connectivity — not bad for something so tiny. There’s not much room for your programs and files, but as Chromebooks have proved, huge amounts of local storage aren’t always necessary for something that you take on the road. The group is also working on a matching mousepad that can double as a charger while you’re using the Mouse-Box.

A micro-HDMI connector is included to attach the Mouse-Box to a display

If you want to just use it as a normal mouse, then that’s fine too. You could take it into work and then switch between your work PC and the Mouse-Box with a button press, for example.

The prototype is fully functional, but the Mouse-Box team is looking for a manufacturing partner to be able to produce the device commercially. Given its specifications, it’s likely that the finished product would run a lightweight OS such as ChromeOS or Linux, but no further details have been divulged just yet. Another unknown is the price, though the developers have gone on record as saying the Mouse-Box will be available cheaply and across the world, funding permitting.

Is this something you’d back on Kickstarter or Indiegogo? If so, the Mouse-Box inventors would love you to share their video, embedded below.


Henry Sapiecha

At California’s Lawrence Livermore National Laboratory, the world’s most powerful computers are working on some of our most fundamental questions about the universe. The Sierra supercomputer, for example, is delving into the Big Bang and trying to figure out why elementary particles have mass.


But Sierra is also solving problems that are closer to home. This supercomputer, and more recently Titan at Oak Ridge National Laboratory in Tennessee, the world’s second most powerful computer, have been helping GE engineers build a better jet engine.


This image shows a snapshot from a numerical simulation of a generic aircraft engine injector. Top image: an animation showing a numerical simulation of a jet fuel spray performed on Sierra in collaboration with Cornell. Researchers used between 500,000 and 1 million CPU hours of simulation time. (One CPU hour is equal to one hour used by one computer processor for simulation.)

Jet engines have been complicated creatures ever since GE built the first one in the U.S. in 1941, and their designs have only grown more intricate since.

Madhu Pai, an engineer in the Computational Combustion Lab at GE Global Research, is working on an elaborate part in the jet engine combustor called the fuel injector. “It delivers the lifeblood of a jet engine combustor,” he says.

Injectors atomize liquid jet fuel and spray it into the combustion chamber where it burns and generates energy for propulsion. “They are one of the most challenging parts to design and very expensive to produce,” Pai says. (The next-generation LEAP jet engine is the world’s first engine with 3D-printed injectors.)


This fuel nozzle for the LEAP jet engine was 3D-printed from a special alloy.

Pai has teamed up with researchers from Arizona State and Cornell universities to use Titan and Sierra to study what exactly happens inside a fuel injector. The time and processing power the engineers have at their disposal is equal to running 10,000 computer processors simultaneously for over 9 months. “The supercomputer gives us a microscopic view of the inside of the injector,” Pai says. “We can study the processes occurring in regions hidden behind the metal or where the fuel spray is too dense. This allows us to better understand the physics behind the design.”
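
For scale, that allocation is easy to restate in the CPU hours defined above. Here is a quick back-of-the-envelope check in Python; the 30-day month is our simplifying assumption, and the per-simulation cost is the figure quoted in the caption above:

```python
# Rough CPU-hour equivalent of the allocation described above:
# 10,000 processors running simultaneously for just over 9 months.
HOURS_PER_MONTH = 30 * 24  # assumes 30-day months

processors = 10_000
months = 9

cpu_hours = processors * months * HOURS_PER_MONTH
print(f"{cpu_hours:,} CPU hours")  # 64,800,000 CPU hours

# At the 500,000 to 1 million CPU hours quoted per spray simulation,
# that is enough for roughly 64 to 129 such runs.
print(cpu_hours // 1_000_000, cpu_hours // 500_000)  # 64 129
```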

This is physics with practical implications. Pai says that small changes to fuel nozzle geometry could lead to significant changes in engine performance. “These high-fidelity computer simulations help us understand how air and fuel mix and burn, and eventually reduce the number of trials,” Pai says. “Ultimately, we want to build more powerful engines that consume less fuel and have lower emissions.”

Pai’s simulations could also yield insights beyond jet engines, improving injectors used in locomotives and land-based gas turbines, and potentially finding applications in healthcare. “This is just the beginning,” he says.


A still from a supercomputer simulation of a jet fuel spray.

Henry Sapiecha

Take a look at other GE research involving supercomputers here.

Douglas Engelbart, inventor of the computer mouse, dies at 88


SAN FRANCISCO | Wed Jul 3, 2013 10:28pm EDT

(Reuters) – Douglas Engelbart, a technologist who conceived of the computer mouse and laid out a vision of an Internet decades before others brought those ideas to the mass market, died on Tuesday night. He was 88.


His eldest daughter, Gerda, said by telephone that her father died of kidney failure.

Engelbart arrived at his crowning moment relatively early in his career, on a winter afternoon in 1968, when he delivered an hour-long presentation containing so many far-reaching ideas that it would be referred to decades later as the “mother of all demos.”

Speaking before an audience of 1,000 leading technologists in San Francisco, Engelbart, a computer scientist at the Stanford Research Institute (SRI), showed off a cubic device with two rolling discs called an “X-Y position indicator for a display system.” It was the mouse’s public debut.

Engelbart then summoned, in real time, the image and voice of a colleague 30 miles away. That was the first videoconference. And he explained a theory of how pages of information could be tied together using text-based links, an idea that would later form the bedrock of the Web’s architecture.

At a time when computing was largely pursued by government researchers or hobbyists with a countercultural bent, Engelbart never sought or enjoyed the explosive wealth that would later become synonymous with Silicon Valley success. For instance, he never received any royalties for the mouse, which SRI patented and later licensed to Apple.


He was intensely driven instead by a belief that computers could be used to augment human intellect. In talks and papers, he described with zeal and bravado a vision of a society in which groups of highly productive workers would spend many hours a day collectively manipulating information on shared computers.

“The possibilities we are pursuing involve an integrated man-machine working relationship, where close, continuous interaction with a computer avails the human of radically changed information-handling and -portrayal skills,” he wrote in a 1961 research proposal at SRI.

His work, he argued with typical conviction, “competes in social significance with research toward harnessing thermonuclear power, exploring outer space, or conquering cancer.”

A proud visionary, Engelbart found himself intellectually isolated at various points in his life. But over time he was proved correct more often than not.

“To see the Internet and the World Wide Web become the dominant paradigms in computing is an enormous vindication of his vision,” Mitch Kapor, the founder of Lotus Development Corporation, said in an interview on Wednesday. “It’s almost like Leonardo da Vinci envisioning the helicopter hundreds of years before they could actually be built.”


By 2000, Engelbart had won prestigious accolades including the National Medal of Technology and the Turing Award. He lived in comfort in Atherton, a leafy suburb near Stanford University.

But he wrestled with his fade into obscurity even as entrepreneurs like Steve Jobs and Bill Gates became celebrity billionaires by realizing some of his early ideas.

In 2005, he told Tom Foremski, a technology journalist, that he felt the last two decades of his life had been a “failure” because he could not receive funding for his research or “engage anybody in a dialogue.”

Douglas Carl Engelbart was born on January 30, 1925, in Portland to a radio repairman father who was often absent and a homemaker mother.

He enrolled at Oregon State University, but was drafted into the U.S. Navy and shipped to the Pacific before he could graduate.

He resolved to change the world as a computer scientist after coming across a 1945 article by Vannevar Bush, the head of the U.S. Office of Scientific Research, while scouring a Red Cross library in a native hut in the Philippines, he told an interviewer years later.


After returning to the United States to complete his degree, Engelbart took a teaching position at the University of California, Berkeley, after Stanford declined to hire him because his research seemed too removed from practical applications. It would not be the last time his ideas were rejected.

Engelbart also worked at the Ames Laboratory, part of NASA’s precursor, the National Advisory Committee for Aeronautics. He obtained a doctorate in electrical engineering from Berkeley in 1955.

He took a job at SRI in 1957, and by the early 1960s Engelbart led a team that began to seriously investigate tools for interactive computing.

After coming back from a computer graphics conference in 1961, Engelbart sketched a design of what would become the mouse and tasked Bill English, an engineering colleague, with carving a prototype out of wood. Engelbart’s team considered other designs, including a device that would be affixed to the underside of a table and controlled by the knee, but the desktop mouse won out.


SRI would later license the technology for $40,000 to Apple, which released its first commercial mouse with the Lisa computer in 1983.

In the late 1970s, Engelbart’s research group was acquired by a company called Tymshare. In the final decades of his career, Engelbart struggled to secure funding for his work, much less return to the same heights of influence.

“I don’t think he was at peace with himself, partly because many, many things that he forecast all came to pass, but many of the things that he saw in his vision still hadn’t,” said Kapor, who helped fund Engelbart’s work in the 1990s. “He was frustrated by his inability to move the field forward.”

In 1986, Engelbart told interviewers from Stanford that his mind had always roamed in a way that set him apart or even alienated him.

“Growing up without a father, through the teenage years and such, I was always sort of different,” Engelbart said. “Other people knew what they were doing, and had good guidance, and had enough money to do it. I was getting by, and trying. I never expected, ever, to be the same as anyone else.”

He is survived by Karen O’Leary Engelbart, his second wife, and four children: Gerda, Diana, Christina and Norman. His first wife, Ballard, died in 1997.

Computer industry pioneer … Douglas Engelbart.

Douglas Engelbart, the visionary electrical engineer who invented the computer mouse decades before the influx of personal computers into homes and workplaces, has died. He was 88.


He died July 2 at his home in Atherton, California, according to SRI International, the research institute founded by Stanford University. The cause was kidney failure, the New York Times reported, citing his wife, Karen O’Leary Engelbart.


Engelbart’s work at SRI, then called the Stanford Research Institute, resulted in 21 patents. The last one, No. 3,541,541, filed in 1967 and granted in 1970, was for the computer mouse.


The first prototype of a computer mouse, as designed by Bill English from Douglas Engelbart’s sketches.


“Doug’s legacy is immense,” Curtis R. Carlson, president of SRI, said in a statement. “Anyone in the world who uses a mouse or enjoys the productive benefits of a personal computer is indebted to him.”


In the patent application, the device was described in technical terms: “An X-Y position indicator control for movement by the hand over any surface to move a cursor over the display on a cathode ray tube, the indicator control generating signals indicating its position to cause a cursor to be displayed on the tube at the corresponding position.”


Palm-sized


A patent submitted by Douglas Engelbart for the first computer mouse.

Photo: U.S. PATENT OFFICE


He had devised the palm-sized, wheel-based instrument in 1963 as a way to move a computer-screen cursor by means other than arrows on a keyboard. Other alternatives being weighed at the time were a light-pen pointed at the screen, a tracking ball and a joystick.


“I remember how my head went back to a device called a planimeter,” another wheel-based device used by engineers to measure irregular geometric areas, he recalled in a 1987 oral-history interview with Stanford University Libraries.


His colleague William English, SRI’s chief engineer, led the tinkering and testing of the cursor controller, which was carved from wood and used two perpendicular wheels rather than the roller ball included in subsequent incarnations. English built the first prototype in 1964.


On Dec. 9, 1968, at a computer conference in San Francisco, Engelbart unveiled his team’s work in a presentation that became known in tech circles as “the mother of all demos”. During the 90-minute session, linked to his lab by a homemade modem, Engelbart showed off then-novel feats including interactive computing, video conferencing, windowed displays and hypertext – plus the rectangular, three-button controller he used to move the cursor on the screen.


Naming rationale


“I don’t know why we call it a mouse,” he told his audience that day. “Sometimes I apologise. It started that way and we never did change it.”


The rationale for the name, he said in other interviews, was quite simple: the device resembled the rodent, with its cord as a tail. He said nobody on his team could remember who used the term first.


The computer mouse burst into public consciousness in the 1980s after being refined at Xerox’s Palo Alto Research Centre, debuting with little commercial success as part of the Xerox Star computer in 1981, then finally becoming an integral part of computers sold by Apple and International Business Machines (IBM).


Over the next three decades the mouse was offered in a rainbow of colors and in different styles: cordless, optical rather than mechanical, designed for left-handed use, ergonomically correct. Logitech International, the world’s biggest computer mouse maker, introduced its first mouse for retail in 1985 and shipped its 500 millionth in 2003 and its billionth in 2008.


No royalties


“Isn’t that unbelievable?” Engelbart said in a 2004 interview with BusinessWeek, describing his invention’s lasting ubiquity. “My first thought was that you’d think someone would have come up with a more appropriately dignified name for it by now.”


Engelbart earned no royalties from his invention. He did win, in 1997, the $US500,000 Lemelson-MIT Prize for inventors, and in 2000, he received the National Medal of Technology and Innovation from US President Bill Clinton.


“More than any other person,” said the award citation, “he created the personal computing component of the computer revolution.”


Douglas Carl Engelbart was born on Jan. 30, 1925, near Portland, Oregon, the middle of three children of Carl Engelbart, a radio salesman and repairman, and the former Gladys Munson.


After two years of university, he was drafted and spent two years in the US Navy, from 1944 to 1946.


Chance encounter


During a layover on the South Pacific island of Leyte, on the way to his posting in the Philippines as an electronic radar technician, Engelbart found a Red Cross library – “a genuine native hut, up on stilts, with a thatched roof”, he recalled. “You came up a little ladder or stairs, and inside it was very clean and neat. It had bamboo poles and was just really nice looking. There were lots of books, and nobody else there.”


It was in that unusual academic venue, he recalled, that he encountered As We May Think, an essay in the Atlantic Monthly by Vannevar Bush, head of US wartime scientific research and development.


In it, Bush predicted technological advancements that would lead to breakthroughs in human knowledge, including “a future device for individual use, which is a sort of mechanised private file and library,” on which a person “stores all his books, records, and communications, and which is mechanised so that it may be consulted with exceeding speed and flexibility.”


Engelbart recalled, “I remember being thrilled. Just the whole concept of helping people work and think that way just excited me.”


Earliest computers


After the war, he received a bachelor’s degree in electrical engineering at Oregon State University, in Corvallis, Oregon, in 1948. He spent three years at the federal Ames Research Centre in Mountain View, California, then four years at the University of California-Berkeley, where he earned a Ph.D. in engineering and contributed to building one of the earliest digital computers.


According to a biography written by his daughter, Christina Engelbart, by then he was envisioning “people sitting in front of cathode-ray-tube displays, ‘flying around’ in an information space where they could formulate and portray their concepts in ways that could better harness sensory, perceptual and cognitive capabilities heretofore gone untapped. Then they would communicate and collectively organise their ideas with incredible speed and flexibility.”


Engelbart joined SRI in 1957 and began accumulating patents, some tracing to his graduate work. He became director of the institute’s laboratory, which he named the Augmentation Research Centre.


ARPANET beginnings


In 1962 he produced his own influential paper, Augmenting Human Intellect: A Conceptual Framework, for the US Air Force Office of Scientific Research, building off Bush’s work of two decades earlier. The paper earned him a share of research funds distributed through the Defense Department’s Advanced Research Projects Agency, first known as ARPA, then DARPA.


The Engelbart-led lab at SRI contributed to creation of the ARPANET computer network, a predecessor of the internet.


In 1988, Engelbart left his research job at McDonnell Douglas and, with daughter Christina, set up a nonprofit foundation to advocate his ideas for improving collective knowledge. The foundation started as the Bootstrap Institute and in 2008 became the Doug Engelbart Institute.


Engelbart had four children — daughters Gerda, Diana and Christina, and son Norman — with his first wife, the former Ballard Fish, who died in 1997. He married the former Karen O’Leary in 2008.


Henry Sapiecha


JAPANESE CRAB COMPUTERS ARE HERE, SO CHECK THEM OUT NOW

Wouldn’t your latest generation tablet be way cooler if it ran on live crabs? Thanks to Yukio-Pegio Gunji and his team at Japan’s Kobe University, the era of crab computing is upon us … well, sort of. The scientists have exploited the natural behavior of soldier crabs to design and build logic gates – the most basic components of an analogue computer. They may not be as compact as more conventional computers, but crab computers are certainly much more fun to watch.

Electricity and microcircuits aren’t the only way to build a computer. In fact, electronic computers are a relatively recent invention. The first true computers of the 19th and early 20th centuries were built out of gears and cams and over the years many other computers have forsaken electronics for marbles, air, water, DNA molecules and even slime mold to crunch numbers. Compared to the slime mold, though, making a computer out of live crabs seems downright conservative.

The scientists at Kobe University didn’t just pop down to the market for their crabs. They focused their attention on a particular species: soldier crabs (Mictyris longicarpus). These are found on the beaches of Australia and surrounding islands, where they regularly provide visitors with surreal performances. Individually, soldier crabs are timid little blue crustaceans that won’t even go into the water, but when they form into swarms, which can number in the tens of thousands, it’s a different matter.

Once set in motion by something like a bird’s shadow passing overhead, the soldier crabs tear off like an army of demented robots. They rush about in a strange, boiling mass that seems like an exercise in utter chaos, yet the swarm itself moves in a remarkably consistent straight line. This determined, predictable manner of movement is the key to the crab computer.

When two swarms of soldier crabs collide, something remarkable happens. Instead of collapsing into a riotous battle, the two swarms meet in a manner that’s as predictable as a pair of billiard balls hitting each other. When two identical billiard balls collide head-on, they ideally (all things being equal) rebound off one another in opposite directions. If they strike at an angle, they fly away from each other at the opposite angle. It’s all very predictable Newtonian mechanics. In the case of soldier crabs, it’s like two balls of soft modelling clay hitting each other: they squash together into a new, larger swarm and head off at the combined angle of the original swarms, with a remarkable degree of predictability.

Exploiting this behavior, the Kobe team figured out how to use the crabs to make logic gates. They did this by placing swarms of crabs in a simple maze. In one configuration, swarms were set off in two legs of the maze; when they collided, they headed off down a third leg. Since swarms always travel in the same direction, a single swarm placed in the maze will go down the same output leg as if it had collided with another swarm, rather than doubling back up the other leg. In this way, the maze becomes an OR gate: if one or two swarms enter the maze, the output is always positive. One swarm OR another swarm in the maze equals a positive, otherwise negative.

The researchers also used another maze, in the shape of an X with a fifth leg running vertically up from the center. In this maze, letting loose one swarm resulted in the swarm passing straight through the center and into the opposite leg of the X. If two swarms are loosed, they collide in the center, sending the merged swarm up through the center leg. This is the crab equivalent of an AND gate: one swarm going in produces a negative; two produce a positive. One swarm AND another swarm equals positive, otherwise negative.

With these two gates, it would be theoretically possible to build more complicated logic gates and from there, full-fledged computers.
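
To make the gates concrete, here is a minimal Python model of the two mazes as described above. Representing each swarm as a simple present/absent boolean is our simplification for illustration, not the researchers’ formulation:

```python
# Minimal model of the two crab-maze gates described above.
# A swarm input is True (a swarm is released into that leg) or False.
# Per the article: swarms always travel in a fixed direction, and
# colliding swarms merge and continue on together.

def crab_or(swarm_a: bool, swarm_b: bool) -> bool:
    """Three-legged maze: one swarm, or two merged swarms, always
    exits down the same output leg, so any input produces output."""
    return swarm_a or swarm_b

def crab_and(swarm_a: bool, swarm_b: bool) -> bool:
    """X-shaped maze with a fifth central leg: a lone swarm passes
    straight through, but two colliding swarms merge and deflect up
    the central output leg."""
    return swarm_a and swarm_b

for a in (False, True):
    for b in (False, True):
        print(a, b, "-> OR:", crab_or(a, b), "AND:", crab_and(a, b))
```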

Currently, there are no plans to build a full-blown crab computer, but if seafood cybernetics ever does take off, this, they will say, is where it all began.

The research was recently outlined in a paper entitled Robust Soldier Crab Ball Gate [PDF] in the journal Complex Systems.

Sourced & published by Henry Sapiecha

DATA STORAGE HAS BECOME EVEN SMALLER

World’s smallest magnetic data storage unit created
If you’re impressed with how much data can be stored on your portable hard drive, well … that’s nothing. Scientists have now created a functioning magnetic data storage unit that measures just 4 by 16 nanometers, uses 12 atoms per bit, and can store an entire byte (8 bits) on as little as 96 atoms – by contrast, a regular hard drive requires half a billion atoms for each byte. It was created by a team of scientists from IBM and the German Center for Free-Electron Laser Science (CFEL), which is a joint venture of the Deutsches Elektronen-Synchrotron DESY research center in Hamburg, the Max-Planck-Society and the University of Hamburg.
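
The arithmetic behind that comparison is easy to check; here is a quick sketch using the article’s round figures:

```python
# Sanity-checking the storage-density comparison above.
atoms_per_bit = 12
bits_per_byte = 8

atoms_per_byte_new = atoms_per_bit * bits_per_byte  # 96 atoms per byte
atoms_per_byte_hdd = 500_000_000                    # "half a billion atoms" per byte

print(atoms_per_byte_new)                        # 96
print(atoms_per_byte_hdd // atoms_per_byte_new)  # ~5.2 million times fewer atoms per byte
```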

Sourced & published by Henry Sapiecha


COMPUTER BIO LOGIC GATES FROM BACTERIA

DNA is often referred to as the building block of life. Now scientists from Imperial College London have demonstrated that DNA (and bacteria) can be used to create the fundamental building blocks of a computer – logic gates. Using DNA and harmless gut bacteria, the scientists have built what they claim are the most advanced biological logic gates ever created by scientists. The research could lead to the development of a new generation of microscopic biological computing devices that, amongst other things, could travel around the body cleaning arteries and destroying cancers.

While previous research had already proven biological logic gates could be made, the Imperial College scientists say the big advantage of their creations is that they behave like their electronic counterparts – replicating the way that electronic logic gates process information by either switching “on” or “off.” Importantly, the new biological logic gates are also modular, meaning they could be fitted together to make different types of logic gates and more complex biological processors.

To create a type of logic gate called an “AND gate,” the team used modified DNA to reprogram Escherichia coli (E. coli) bacteria to perform the same switching on and off process as its electronic equivalent when stimulated by chemicals. Much as electronic components are assembled, the team demonstrated that the biological gates could be connected together to form more complex components.

The team also created a “NOT gate” and combined it with the AND gate to produce the more complex “NAND gate.” NAND gates are significant because any Boolean function (AND, OR, NOT, XOR, XNOR), all of which play a basic role in the design of computer chips, can be implemented using only combinations of NAND gates.
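
That universality is easy to demonstrate in software (in Python rather than bacteria, of course); in the sketch below, every other gate is composed purely from NAND:

```python
# Every gate below is built only from NAND, illustrating why a working
# biological NAND gate is such a powerful building block.

def nand(a: bool, b: bool) -> bool:
    return not (a and b)

def not_(a: bool) -> bool:
    return nand(a, a)

def and_(a: bool, b: bool) -> bool:
    return not_(nand(a, b))

def or_(a: bool, b: bool) -> bool:
    return nand(not_(a), not_(b))

def xor(a: bool, b: bool) -> bool:
    n = nand(a, b)
    return nand(nand(a, n), nand(b, n))

for a in (False, True):
    for b in (False, True):
        print(a, b, "AND:", and_(a, b), "OR:", or_(a, b), "XOR:", xor(a, b))
```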

The researchers will now try to develop more complex circuitry comprising multiple logic gates. To accomplish this they will need to find a way to link many biological logic gates together, similar to the way electronic logic gates are linked to enable complex processing.

“We believe that the next stage of our research could lead to a totally new type of circuitry for processing information,” said Professor Martin Buck from the Department of Life Sciences at Imperial College London. “In the future, we may see complex biological circuitry processing information using chemicals, much in the same way that our body uses them to process and store information.”

The team also suggests that these biological logic gates could one day form the building blocks of microscopic biological devices, such as sensors that swim inside arteries, detecting the build up of harmful plaque and rapidly delivering medications to the affected area. Other sensors could detect and destroy cancer cells inside the body, while others could be deployed in the environment to monitor pollution and detect and neutralize dangerous toxins.

Sourced & published by Henry Sapiecha

10 products that defined Steve Jobs from Apple

One of the first Apple computers.

Steve Jobs had no formal schooling in engineering, yet he’s listed as the inventor or co-inventor on more than 200 US patents.

Co-founder of Apple retires as CEO of the mighty conglomerate he drove to the top of the IT world.

Sourced & published by Henry Sapiecha

Unseen NASA space pics now available for viewing online

NASA has released a trove of data from its sky-mapping mission, allowing scientists and anyone with access to the Internet to peruse millions of galaxies, stars, asteroids and other hard-to-see objects.

Many of the targets in the celestial catalog released online this week have been previously observed, but there are significant new discoveries. The mission’s finds include more than 33,000 new asteroids floating between Mars and Jupiter and 20 comets.

NASA launched the Wide-field Infrared Survey Explorer, which carried an infrared telescope, in December 2009 to scan the cosmos in finer detail than previous missions. The spacecraft, known as WISE, mapped the sky one and a half times during its 14-month mission, snapping more than 2.5 million images from its polar orbit.

The spacecraft’s ability to detect heat glow helps it find dusty, cold and distant objects that are often invisible to regular telescopes.

The batch of images made available represents a little over half of what’s been observed in the all-sky survey. The full cosmic census is scheduled for release next (northern) spring.

“The spectacular new data just released remind us that we have many new neighbours,” said Pete Schultz, a space scientist at Brown University, who had no role in the project.

University of Alabama astronomer William Keel has already started mining the database for quasars – compact, bright objects powered by super-massive black holes.

“If I see a galaxy with highly ionized gas clouds in its outskirts and no infrared evidence of a hidden quasar, that’s a sign that the quasar has essentially shut down in the last 30,000 to 50,000 years,” Keel said.

WISE ran out of coolant in October, making it unable to chill its heat-sensitive instruments. So it spent its last few months searching for near-Earth asteroids and comets that should help scientists better calculate whether any are potentially threatening.

The mission, managed by NASA’s Jet Propulsion Laboratory, was hundreds of times more sensitive than its predecessor, the Infrared Astronomical Satellite, which launched in 1983 and made the first all-sky map in infrared wavelength.

AP Sourced & published by Henry Sapiecha

People too complicated for machines to read thoughts

Nicky Phillips SCIENCE

January 29, 2011

Rolling debate … experts are undecided about what brain scans can reveal.

BEFORE the US presidential election in 2008, scientists reported they had, quite literally, peered into the minds of swinging voters.

When a group of people were shown the words “Democrat” or “Republican” while undergoing a brain scan, they showed high levels of activity in a region called the amygdala.

The scientists concluded that because this region was associated with anxiety, the participants felt that way about the political parties.


The conclusion was strongly resisted by a group of rival neuroscientists who published a response to the study several days after it was reported in The New York Times.

It was not possible to determine whether a person was anxious simply by looking at the activity in a particular brain region, they said. “This is because brain regions are typically engaged by many mental states, and thus one-to-one mapping between a brain region and a mental state is not possible.”

This stand-off typifies the rolling debate over what brain scans can really show.

To date, many studies claim to have found the regions of the brain for things as diverse as love, sarcasm, sex drive and even voting choice, fuelling the idea that the brain is made up of modules and individual parts.

Brain scans are generally taken with functional magnetic resonance imaging, or fMRI, which has, for the first time, allowed scientists to watch the flow of activity in the brain in real time without cutting open the skull.

But despite the clarity that comes with fMRI, it does not take photographs.

An American psychologist, Diane Beck, said the highlighted region of the brain in an fMRI image was not a direct measure of that region’s activity.

“The construction of the colourful images we see in journals and magazines is considerably more complicated, and considerably more processed, than the photo-like quality of the images might lead one to believe,” said Dr Beck, of the University of Illinois.

So has fMRI really advanced our understanding of how the thoughts, emotions and feelings of the mind are linked to the soggy, 1.5-kilogram mass of tissue inside the skull?

The debate around fMRI’s powers for probing the mind came to a head in 2009 when an American review found almost half of fMRI studies of emotion and personality had overstated their data linking a specific brain region to an emotion or personality trait.

In a recent article published in the journal Perspectives on Psychological Science, the American psychologist Gregory Miller agreed. “The rush in recent decades to construe a host of psychological events as being biological events is, at best, premature,” he wrote.

Ulrich Schall, a psychiatrist and psychologist at the University of Newcastle, said fMRI did not directly measure brain activity; instead it measured blood flow in the brain, which increased as neurons became active, and was therefore an indirect measure of their activity.

When someone was performing a specific mental task it was not possible to clearly identify the biological basis of that task in the brain, Associate Professor Schall said. That was just the interpretation of a scientist.

And unless studies were well designed, he said, the interpretation might be meaningless.

But fMRI clearly had a role in studying the brain. It was good for measuring brain development and studying people with mental disorders, he said.

Associate Professor Schall said scientists were confident of the function of primary processing regions of the brain, such as the areas associated with speech, vision and movement.

But scientists were still far away from understanding the basis of more complex cognitive functions such as numeracy, social interactions, intentions of people and planning, he said. “These things are certainly not localised and need the combination of many parts of the brain.”

Like many scientists, he believed everything that people experienced in their minds, such as thoughts and feelings, had a physical or biological origin.

“But I use the word believe because I don’t have final proof of that,” he said.

Sourced & published by Henry Sapiecha

Fruit fly research could lead to simpler and more robust computer network systems

By Grant Banks

21:30 January 17, 2011


Over the years science has gleaned an enormous amount of knowledge from the humble fruit fly. Drosophila melanogaster was used to provide the post-Mendelian foundations for our understanding of genetics and has also been used extensively in neuroscience research. The latest fruit fly-inspired innovation could simplify how wireless sensor networks communicate and stands to have wider applications for computing.

This is not the first time computing systems have been compared to biological systems. Learning from a comparison between Linux and E. coli and using flies’ eyes to help develop faster visual receivers for robots are just two examples. This time round, researchers at Carnegie Mellon University (CMU) in Pittsburgh, Pennsylvania, have discovered that the fruit fly’s developing nervous system organizes its cells in a highly efficient way that stands to have applications in computer networking.

Without communicating with surrounding cells or knowing in advance what those cells are doing, the fly’s developing nervous system organizes itself so that a small number of cells become leader cells, or sensory organ precursor (SOP) cells, while the rest become ordinary nerve cells. The SOPs connect to adjoining nerve cells, but not to other SOPs; they also connect to the ends of the nervous system that are attached to tiny hairs for interacting with the outside world. What is extraordinary about how this hierarchy of cells organizes itself is that the right number and combination of SOP cells and nerve cells form without the need for complicated information exchange.

The fly’s nervous system uses a probabilistic method to select the cells that will become SOPs. The cells have no information about how they are connected to each other but as various cells self-select themselves as SOPs, they send out chemical signals to neighboring cells that inhibit those cells from also becoming SOPs. This process continues for three hours, until all of the cells are either SOPs or are neighbors to an SOP, and the fly emerges from the pupal stage.

Ziv Bar-Joseph, associate professor of machine learning and computational biology at CMU and an author of the report, noted that the probability that any cell will self-select increases not as a function of its connections, as in the maximal independent set (MIS) algorithms used in computer networking, but as a function of time. The researchers believe this system could be used to develop computer networks that are much simpler and more robust.
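
The core time-driven idea can be sketched in a few lines of Python. This is a simplified reading of the process described above, not the authors’ code; the example graph, round count and probability schedule are illustrative assumptions:

```python
import random

# Simplified sketch of fly-inspired MIS selection: undecided cells
# self-select (become "SOPs") with a probability that grows with time,
# not with how many neighbors they have. A cell that self-selects while
# none of its neighbors do joins the MIS and inhibits its neighbors.

def fly_mis(neighbors, rounds=50, seed=0):
    rng = random.Random(seed)
    undecided = set(neighbors)
    mis = set()
    for r in range(1, rounds + 1):
        p = r / rounds  # selection probability increases as a function of time
        candidates = {v for v in undecided if rng.random() < p}
        for v in candidates:
            # join the MIS only if no neighbor signaled in the same round
            if not candidates & neighbors[v]:
                mis.add(v)
        # MIS members drop out; their neighbors are inhibited and drop out too
        inhibited = set.union(set(), *(neighbors[v] for v in mis))
        undecided -= mis | inhibited
        if not undecided:
            break
    return mis

# Example: a ring of six cells, each adjacent to its two neighbors.
ring = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
print(fly_mis(ring))  # an independent set of the ring; which one depends on the seed
```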

“It is such a simple and intuitive solution, I can’t believe we did not think of this 25 years ago,” said co-author Noga Alon, a mathematician and computer scientist at Tel Aviv University and the Institute for Advanced Study in Princeton, N.J.

Bar-Joseph, Alon and their co-authors – Yehuda Afek of Tel Aviv University and Naama Barkai, Eran Hornstein and Omer Barad of the Weizmann Institute of Science in Rehovot, Israel – developed a new distributed computing algorithm using their findings. The resulting network was shown to have qualities well suited to networks in which the number and position of the nodes is not completely certain, including wireless sensor networks for environmental monitoring or other applications where sensors are dispersed. They also believe this could be used in systems for controlling swarms of robots.

“The run time was slightly greater than current approaches, but the biological approach is efficient and more robust because it doesn’t require so many assumptions,” Bar-Joseph said. “This makes the solution applicable to many more applications.”

The research was supported in part by grants from the National Institutes of Health and the National Science Foundation.

Sourced & published by Henry Sapiecha