The Science Behind Real-Life Invisibility Cloaks

Harry Potter makes it look so easy, but rendering objects invisible is a tricky business, dependent on sleights of hand and perfectly angled mirrors.

Invisibility — or the concept of turning an object completely and undetectably transparent — is the stuff of make-believe, typically reserved for authors of Medieval fantasy, walls in video games and movies featuring precocious kid wizards with South Londoner accents. Yet it is not all magic rings and fairy dust.

Lately cloaks of invisibility have become a topic of serious scientific discussion and, at least in theory, something scientists can bring about. What we once thought was imaginary may simply be really hard to do.

[Image: invisible man]

“What I mean by cloaking an object is that the object becomes fully transparent to visible light, not merely camouflaging or hiding objects,” explains Andrea Alù, an associate professor of engineering at the University of Texas, where he researches the strange behaviors of radio waves and light.

Alù says there are plenty of tricks that give the illusion that something is invisible.

Scientists at the University of Rochester, for instance, have developed special lenses that, when looked through, allow professionals like surgeons to gaze through the back of their hands while performing operations, sort of like X-ray vision goggles in real life. In actuality, the illusion is created by an elaborate system of mirrors positioned just right.

[Image: invisible man 2]

In 2012, Mercedes funded a camouflage technology that in essence hid vehicles in plain sight. It used cameras to capture objects passing behind the car and projected them on a screen placed in front of the car, so that the car appeared to be clear as glass.

[Image: invisible truck]

Alù says this is similar to how marine animals like the mimic octopus disguise themselves in nature, scanning the seabed and reproducing its colors and patterns on their skin. Impressive, yes, but it is still a sleight of hand (or tentacle, as it were).

Instead of optical illusions, Alù is proposing a cloaking material with unusual properties that would make objects genuinely invisible. Imagine wearing a hypothetical invisible body suit, which would cast no shadow as light gracefully rolled around your legs and hips and body instead of bouncing off them.

Light is not supposed to work like this.

“The only way to go around our fundamental bounds is to use active cloaks,” says Alù.

This gets complicated, but an active cloak involves curious man-made materials known as metamaterials, which send light on a detour around an object. Think of it as taking the bypass instead of driving down the main drag.

[Image: light smoke]

This tactic would effectively render things invisible, but it has some truly paradoxical side effects. Since it would take longer for light to travel all the way around an object — let’s say, your totally incognito invisible house — instead of passing straight through it, there would be a bizarre lag effect, where time would seem to progress at different rates.

If, for instance, you were standing on the curb in front of your house and facing it, the area surrounding your invisible house would look normal. Clouds would blow by and trees would sway on their regular schedule.

However, within the transparent rectangle where your cloaked and invisible house was located, time would appear to be moving noticeably slower, delayed by perhaps more than a few seconds.
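How big that lag would be depends on two things: how much longer the detour around the object is, and how fast light effectively travels inside the cloaking material. Here is a back-of-the-envelope sketch with purely illustrative path lengths and speeds (none of these figures come from Alù's research):

```python
# Back-of-the-envelope estimate of the "lag" a light-bending cloak would create.
# All numbers below are illustrative assumptions, not measurements from Alu's work.

C = 299_792_458.0  # speed of light in vacuum, m/s

def apparent_delay(direct_path_m, detour_path_m, effective_speed_m_s):
    """Extra time light needs to detour around the cloaked region
    compared with passing straight through it."""
    straight_time = direct_path_m / C
    detour_time = detour_path_m / effective_speed_m_s
    return detour_time - straight_time

# A house-sized cloak: light that would have crossed 10 m instead travels ~15 m,
# and the metamaterial may slow it to some (hypothetical) fraction of c.
for fraction_of_c in (1.0, 1e-6, 1e-8):
    delay = apparent_delay(10.0, 15.0, C * fraction_of_c)
    print(f"effective speed = {fraction_of_c:.0e} c  ->  delay = {delay:.3g} s")
```

Only if the material slowed light to a tiny fraction of its vacuum speed would the delay stretch into noticeable seconds; for an ordinary detour at full speed it would be nanoseconds.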

“Essentially we’re fighting some fundamental laws of physics,” Alù says, and for that reason making objects legitimately see-through will be extremely difficult. “You can do it. We do it for radio waves. But it is a really long shot,” he says.

So why aren’t we all out prowling invisibly now?

Well, for the time being, metamaterials only work for a very limited amount of time. In laboratories today, only objects no larger than a flea or a speck of dust can be turned invisible.

It may take a hundred years, says Alù, but there may come a day when we can hide inside our invisibility cloak.

Then, science can get to work on levitating broomsticks.

Light smoke via Sergio Alvarez; Invisible Man via Andrew Gustar; Invisible truck via Matt Green; Invisible Man 2 via Eric Tastad; Invisible woman via splityarn.

In this series, we explore how light illuminates, enlivens and even accelerates many aspects of our lives as scientists and artists discover new uses and meaning for the Light in Our Lives.


Henry Sapiecha

3D IMAGERY THE SOLUTION FOR SMITHSONIAN DISPLAY PROBLEMS

What do you do when you’re the world’s largest museum but can display only two percent of the 137 million items in your collection (a mere 2.75 million) at any given time? In an effort to get more of their treasures into the public eye, specialists at the Smithsonian Institution’s 19 collective museums and galleries hit upon the solution of digitizing their collection and 3D printing key models and displays suitable for traveling exhibitions. It’s a tall order, but one that’s sure to give the rapidly blooming business of additive manufacturing a huge boost.

In the past, whenever curators wanted to duplicate an object, they turned to traditional rubber molds and plaster casts. Now, with the Smithsonian’s budding digitization initiative coming up to speed, teams can deploy expensive minimally-invasive laser scanners to generate virtual models of items in the collection with micron-level accuracy. Large additive manufacturing companies, such as RedEye on Demand, can then take those files and generate actual physical replicas suitable for display or loan to other museums, or even schools. The savings on insurance premiums alone could go a long way toward defraying the cost of the massive scanning project.

The program’s two co-coordinators, Adam Metallo and Vincent Rossi, both with fine art backgrounds, began at the museum as model makers. Eventually they managed to secure a grant for a 3D scanner which they knew could generate far better models when teamed with a quality 3D printer. A recent effort resulted in what the Smithsonian calls the “largest 3D printed museum quality historical replica” in the world – a statue of Thomas Jefferson identical to the one on display at Jefferson’s home, Monticello.

“Our mission,” Rossi told SPAR, “is to digitize these huge collections in 3D – everything from insects to aircraft. Our day-to-day job is essentially trying to figure out how to actually accomplish that.” They’ll certainly have their hands full – the museums’ collections literally fill acres of storage space in several facilities scattered around the region.

Unfortunately, funding for the project is still scarce, so Metallo and Rossi split their time between digitizing artifacts with laser or CT scanners (or open-source cloud-based digitization software and standard digital cameras) and touting their services to the museum’s many researchers, curators and conservators, as well as potential corporate sponsors, hoping to drum up support.

“The one resource we have plenty of is amazing content,” Rossi mused, “and along with that comes frustrating problems for us, but they’re potentially interesting problems for the industry. How do we take 3D digitization and take it to the Smithsonian scale? We’re at the ground floor of trying to understand that.”

Indeed, one major issue with archival scans is how to store the digital files so that they’ll be accessible decades into the future, when formats will surely have changed. With millions upon millions of items yet to be scanned, it appears we’ll just have to wait to see how things shape up on that front.
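One common hedge against format obsolescence, offered here only as a general archival practice rather than anything the Smithsonian has announced, is to keep a copy of every scan in simple, openly documented mesh formats alongside the original. A minimal sketch using the open-source trimesh library, with placeholder file names:

```python
# Sketch: export a scan to open, widely supported mesh formats as an archival hedge.
# File names are placeholders; requires the open-source `trimesh` package.
import trimesh

mesh = trimesh.load("jefferson_scan.stl")  # load whatever the scanner produced
mesh.export("jefferson_scan.obj")          # Wavefront OBJ: plain text, widely readable
mesh.export("jefferson_scan.ply")          # Stanford PLY: simple, well documented
print(f"archived {len(mesh.vertices)} vertices, {len(mesh.faces)} faces")
```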

Rossi and Metallo will report on their Smithsonian work at SPAR International 2012, April 15-18, in Houston.

Source: SPAR Point Group via CNET

Sourced & published by Henry Sapiecha

POINT-AIM-SHOOT-WE KNOW YOUR NAME BECAUSE OF YOUR SKIN TYPE ETC

Online shopping and advertising sites already do it: they take information based on the pages or products a person has looked at and provide advertisements, or links to other products, that may also interest that person.

In just a few years shops could use facial recognition technology to do the same.

A Perth professor is working on research that he hopes could play a role in creating this technology.

Associate Professor Ajmal Mian from the University of Western Australia first became interested in facial recognition technology when doing his PhD, which he completed in 2006.

Since then he has continued to research how to use satellite technology to identify facial features that lie under the skin.

It is believed that a dot-sized part of a face may soon be all that is needed to identify a person.

Professor Mian said that by incorporating numerous images of a person, taken from different angles, into a system, it could later identify that person from just a small section of their face.

He said while facial recognition technology was not new, being able to identify someone from just a small part of their face meant recognition could be done faster and easier.

“To be more useful it has to not be intrusive, so you don’t need to come in contact with it like fingerprinting and the ultimate is to do it without people noticing it’s happening, without them having to stop and look at a camera,” Professor Mian said.

“I am trying to dig out more accurate techniques and find different algorithms to be able to identify people more easily.”
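The general approach he describes, enrolling many views of a face and then matching a small probe patch against them, can be sketched with generic feature vectors and a nearest-neighbour search. This is only an illustration of the idea, not Professor Mian's actual algorithm, and the feature extractor below is a crude stand-in:

```python
# Illustrative patch-matching sketch (not Prof. Mian's algorithm).
# Each enrolled view and each probe patch is reduced to a feature vector;
# identification is nearest-neighbour search by cosine similarity.
import numpy as np

def features(patch: np.ndarray) -> np.ndarray:
    """Stand-in feature extractor: a normalised pixel histogram."""
    hist, _ = np.histogram(patch, bins=64, range=(0, 255), density=True)
    return hist / (np.linalg.norm(hist) + 1e-12)

def identify(probe_patch, gallery):
    """Return the enrolled identity whose views best match the probe patch."""
    probe = features(probe_patch)
    scores = {name: max(float(probe @ features(view)) for view in views)
              for name, views in gallery.items()}
    return max(scores, key=scores.get), scores

# Gallery: several patches per person, captured from different angles.
rng = np.random.default_rng(0)
gallery = {"person_a": [rng.integers(0, 256, (32, 32)) for _ in range(5)],
           "person_b": [rng.integers(0, 256, (32, 32)) for _ in range(5)]}
who, scores = identify(gallery["person_a"][0], gallery)
print(who, scores)
```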

He said a shop may use the technology to maintain a customer database.

“We know security cameras are there but if shops say you need to get fingerprinted, people are not going to want to do that,” Professor Mian said.

He said the technology may not necessarily associate people by their names.

“They may group you by different charts, they don’t necessarily have to attach a name to it, each time you come in they see what you buy, if customer A buys item such-and-such they are most likely to buy item such-and-such, like on Amazon,” he said.
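That "customers who bought X also bought Y" grouping is, at its simplest, co-occurrence counting over past purchases. A toy sketch with invented shopping baskets:

```python
# Toy "customers who bought X also bought Y" recommender based on co-occurrence
# counts. The purchase histories below are invented for illustration.
from collections import Counter
from itertools import combinations

baskets = [
    {"coffee", "milk", "sugar"},
    {"coffee", "milk"},
    {"tea", "milk", "honey"},
    {"coffee", "sugar"},
]

co_counts: dict[str, Counter] = {}
for basket in baskets:
    for a, b in combinations(basket, 2):
        co_counts.setdefault(a, Counter())[b] += 1
        co_counts.setdefault(b, Counter())[a] += 1

def recommend(item: str, n: int = 2):
    """Items most often bought together with `item`."""
    return [other for other, _ in co_counts.get(item, Counter()).most_common(n)]

print(recommend("coffee"))  # e.g. ['milk', 'sugar']
```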

Professor Mian said it was up to marketing staff as to how the information was used.

He said multi-spectral imaging can be used to measure light reflected off a face at hundreds of discrete wavelengths in the visible spectrum and beyond.

This meant that the technology being worked on would be able to recognise a person despite their different facial expressions.
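Matching of this kind is often illustrated by comparing per-pixel reflectance spectra (one value per wavelength band) with a spectral-angle measure, which is largely insensitive to overall brightness. The sketch below shows that general idea; it is not the specific technique used in this research:

```python
# Generic multi-spectral matching sketch: compare reflectance spectra
# (one value per wavelength band) using the spectral angle, which ignores
# overall brightness. Not the specific method used in Prof. Mian's research.
import numpy as np

def spectral_angle(s1: np.ndarray, s2: np.ndarray) -> float:
    """Angle (radians) between two reflectance spectra; 0 means identical shape."""
    cos = np.dot(s1, s2) / (np.linalg.norm(s1) * np.linalg.norm(s2))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

bands = np.linspace(400, 1000, 200)               # nm, visible + near-infrared
enrolled = np.exp(-((bands - 650) / 120) ** 2)    # made-up skin reflectance curve
probe_same = 0.7 * enrolled                       # same person, dimmer lighting
probe_other = np.exp(-((bands - 550) / 90) ** 2)  # different spectral signature

print(spectral_angle(enrolled, probe_same))   # ~0: brightness change doesn't matter
print(spectral_angle(enrolled, probe_other))  # larger angle: different signature
```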

Professor Mian said his research may also be able to detect people who have used cosmetic surgery to alter their looks.

He said he did not expect the technology to be expensive once created.

“Once the algorithm is developed it won’t be expensive, it is the research which is the expensive part, all you will need is a few cameras.”

“It’ll start up in shops that spend a lot of money on customer care and marketing and others will follow.”

He admitted that there would be some concerns about privacy.

“There’s always a concern about security and privacy and there’s always a trade-off, it will be a topic of discussion forever,” Professor Mian said.

He said the kind of facial recognition technology he envisioned could be used in security and, if deployed at airports, could greatly improve the identification process at immigration.

Professor Mian was also looking into the possibility of applying it to psychology and to identifying whether people had certain syndromes.

Associate Professor Mian is the only West Australian to have won the Australasian Distinguished Dissertation Award from The Computing Research and Education Association of Australasia.

He has also won two prestigious national fellowships: the Australian Postdoctoral Fellowship and the Australian Research Fellowship.

Sourced & published by Henry Sapiecha

VIDEO CAM THE SIZE OF A GRAIN OF SALT IS DISPOSABLE & USED IN MEDICINE

Tiny video cameras mounted on the end of long thin fiber optic cables, commonly known as endoscopes, have proven invaluable to doctors and researchers wishing to peer inside the human body. Endoscopes can be rather pricey, however, and like anything else that gets put inside people’s bodies, need to be sanitized after each use. A newly-developed type of endoscope is claimed to address those drawbacks by being so inexpensive to produce that it can be thrown away after each use. Not only that, but it also features what is likely the world’s smallest complete video camera, which is just one cubic millimeter in size.

The prototype endoscope was designed at Germany’s Fraunhofer Institute for Reliability and Microintegration, in collaboration with Awaiba GmbH and the Fraunhofer Institute for Applied Optics and Precision Engineering.

Ordinarily, digital video cameras consist of a lens, a sensor, and electrical contacts that relay the data from the sensor. Up to 28,000 sensors are cut out from a silicon disc known as a wafer, after which each one must be individually wired up with contacts and mounted to a lens.

In Fraunhofer’s system, contacts are added to one side of the sensor wafer while it’s still all in one piece. That wafer can then be joined face-to-face with a lens wafer, after which complete grain-of-salt-sized cameras can be cut out from the two joined wafers. Not only is this approach reportedly much more cost-effective, but it also allows the cameras to be smaller and more self-contained – usually, endoscopic cameras consist of a lens at one end of the cable, with a sensor at the other.

The new camera has a resolution of 62,500 pixels (roughly 250 x 250), and it transmits its images via an electrical cable, as opposed to an optical fiber. Its creators believe it could be used not only in medicine, but also in fields such as automotive design, where it could act as an aerodynamic replacement for side mirrors, or be used to monitor drivers for signs of fatigue.

They hope to bring the device to market next year.

Sourced & published by Henry Sapiecha

Medical tech company creates world’s smallest video camera

Medigus has developed the world’s smallest video camera, at just 0.039 inches (0.99 mm) in diameter. The Israeli company’s second-gen model (a 1.2 mm / 0.047-inch diameter camera was unveiled in 2009) has a dedicated 0.66 x 0.66 mm CMOS sensor from TowerJazz that captures images at 45K resolution (approximately 220 x 220 pixels). And no, it’s not destined for use in tiny mobile phones or covert surveillance devices; instead, the camera is designed for medical endoscopic procedures in hard-to-reach regions of the human anatomy.

The miniature cameras are made with bio-compatible components and are suitable for diagnostic and surgical procedures. Potential applications include cardiology, bronchoscopy, gastroenterology, gynecology, and orthopedic and robotic surgery.

“Medical procedures that have not been possible until now become possible with the world’s smallest camera,” said Dr. Elazar Sonnenschein, CEO for Medigus Ltd.

The camera will be integrated into Medigus’ own disposable endoscopic devices as well as sold to third-party manufacturers.

Medigus says it will begin supplying camera samples to US and Japanese manufacturers in coming weeks.

Sourced & published by Henry Sapiecha


CREATING A HUGE WALL POSTER FROM SMALL SECTIONS

It’s hard to say whether this sort of product will unleash a stream of creativity or a gushing torrent of poor taste. Dutch printing company ixxi has come up with an innovative, inexpensive and very nifty way to print and hang large scale artworks. By breaking the photo or design up into lots of smaller cards, which are later joined together for presentation using funky little plastic x and i shaped connectors, ixxi avoids the prohibitive expense of larger scale printing, as well as making it easy to package a wall-sized piece of art up into a small box. In fact, the same technology lets you visit an art gallery, and take a life size, photorealistic replica of your favorite wall fresco home with you, ready to reassemble and hang.

  • The ixxi X connector
  • White cards joined together by ixxi X connectors.
  • Translucent ixxi wall as seen at the Design Academy Eindhoven

Just quietly, dear readers, I occasionally fancy myself as a bit of a photographer. In fact, just last week I pulled out a bunch of my favorite snaps (including this one, which really nails the spirit of a mate and his wife) and got them printed on big 100 x 50 cm (39 x 19.7 in) canvas boards to hang on walls around the house.

Canvas prints and photo prints look great, but they’re fairly expensive – a problem that gets exponentially bigger with size. So on a reasonable budget, you might be able to get a couple of boards printed, but you’re up for quite a lot of money if you want to create a whole feature wall.

That’s where ixxi comes in – this Dutch company has created a very simple, classy system that lets you print any number of smallish cards, on a variety of media, then join them together to form larger artworks using i and x shaped connectors.

That lets you break a photograph up into 50 smaller squares and present it on a large scale … or, you can experiment with the form, creating photomosaics or even pixel art.

Once the cards are linked together, you can choose to hang them on a wall, or even dangle them from a roof to make a bespoke room divider or temporary wall. That looks even cooler when you use semi-translucent card material to print on, like they have at the Design Academy Eindhoven.

The results are impressive enough that if you visit the Rijksmuseum Amsterdam, you can buy a number of the museum’s famous artworks in ixxi format – and take them home with you in a small gift box, full size and ready to assemble.

The best part is the price – because you’re only printing on small squares, generally below A4 size, the printing process is uncomplicated and inexpensive … to the point where a gigantic 2 x 2 meter (6.6 x 6.6 foot) print with whatever you want on it comes out at a measly EUR 125.00 – or just US$178 for a mega print that will transform an entire wall in your house. Try pricing one of those up on canvas … and heck, try transporting the thing!
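The economics are easy to sanity-check: tiling a wall-sized print from small cards keeps each individual print job cheap. A quick sketch with assumed numbers (ixxi does not publish an exact card size or per-card price, so these figures are illustrative only):

```python
# Rough tiling arithmetic for a wall-sized ixxi print.
# Card size and per-card cost are assumptions for illustration only.
import math

wall_w_m, wall_h_m = 2.0, 2.0   # the 2 x 2 m example from the article
card_size_m = 0.20              # assume 20 cm square cards (below A4)
price_total_eur = 125.0         # quoted price for the 2 x 2 m print

cols = math.ceil(wall_w_m / card_size_m)
rows = math.ceil(wall_h_m / card_size_m)
cards = cols * rows
print(f"{cols} x {rows} = {cards} cards, about EUR {price_total_eur / cards:.2f} each")
# -> 10 x 10 = 100 cards, about EUR 1.25 each
```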

Now, if I could only learn to take a photo or create a pixel artwork worthy of that kind of presentation!

More at the ixxi website.

Sourced & published by Henry Sapiecha

Unseen NASA space pics now available for viewing online

NASA has released a trove of data from its sky-mapping mission, allowing scientists and anyone with access to the Internet to peruse millions of galaxies, stars, asteroids and other hard-to-see objects.

Many of the targets in the celestial catalog released online this week have been previously observed, but there are significant new discoveries. The mission’s finds include more than 33,000 new asteroids floating between Mars and Jupiter and 20 comets.

NASA launched the Wide-field Infrared Survey Explorer, which carried an infrared telescope, in December 2009 to scan the cosmos in finer detail than previous missions. The spacecraft, known as WISE, mapped the sky one and a half times during its 14-month mission, snapping more than 2.5 million images from its polar orbit.

The spacecraft’s ability to detect heat glow helps it find dusty, cold and distant objects that are often invisible to regular telescopes.

The batch of images made available represents a little over half of what’s been observed in the all-sky survey. The full cosmic census is scheduled for release next (northern) spring.

“The spectacular new data just released remind us that we have many new neighbours,” said Pete Schultz, a space scientist at Brown University, who had no role in the project.

University of Alabama astronomer William Keel has already started mining the database for quasars – compact, bright objects powered by super-massive black holes.

“If I see a galaxy with highly ionized gas clouds in its outskirts and no infrared evidence of a hidden quasar, that’s a sign that the quasar has essentially shut down in the last 30,000 to 50,000 years,” Keel said.

WISE ran out of coolant in October, making it unable to chill its heat-sensitive instruments. So it spent its last few months searching for near-Earth asteroids and comets that should help scientists better calculate whether any are potentially threatening.

The mission, managed by NASA’s Jet Propulsion Laboratory, was hundreds of times more sensitive than its predecessor, the Infrared Astronomical Satellite, which launched in 1983 and made the first all-sky map in infrared wavelength.

Source: AP

Sourced & published by Henry Sapiecha

THREE-DIMENSIONAL PHOTOS CAN NOW BE TAKEN OF THE SUN

On October 26, 2006, NASA launched two STEREO (Solar Terrestrial Relations Observatory) spacecraft. Using the Moon’s gravity for a gravitational slingshot, the two nearly identical spacecraft, STEREO-A and STEREO-B, split up with one pulling ahead of the Earth and the other gradually falling behind. It’s taken over four years but on February 6, 2011, the two spacecraft finally moved into position on opposite sides of the Sun, each looking down on a different hemisphere. The probes are now sending back images of the star, front and back, allowing scientists for the first time to view the entire Sun in 3D.

Each of the probes captures images of half of the Sun and beams them back to Earth where researchers combine the two opposing views to create a sphere. To track key aspects of solar activity such as flares, tsunamis and magnetic filaments, STEREO’s telescopes are tuned to four wavelengths of extreme ultraviolet radiation.

Space weather forecasting

The resultant 3D images will allow researchers to improve space weather forecasts to provide earlier and more accurate warnings for potentially damaging coronal mass ejections (CMEs) that can impact aircraft navigation systems, power grids and satellites. Previously, an active sunspot could emerge on the far side of the Sun before the Sun’s rotation turned that region toward Earth, spitting flares and clouds of plasma with little warning.

“Not anymore,” says Bill Murtagh, a senior forecaster at the National Oceanic and Atmospheric Administration’s (NOAA) Space Weather Prediction Center in Boulder, Colorado. “Farside active regions can no longer take us by surprise. Thanks to STEREO, we know they’re coming.”

As part of NASA’s ‘Solar Shield’ project, the NOAA is already using 3D STEREO models of CMEs to improve space weather forecasts, but the full Sun view should improve these forecasts even more. And the forecasting benefits aren’t just limited to Earth. The global 3D model of the Sun also allows researchers to track solar storms heading for other planets, which is important for NASA missions to Mercury, Mars and even asteroids.

“With data like these, we can fly around the Sun to see what’s happening over the horizon—without ever leaving our desks,” says STEREO program scientist Lika Guhathakurta at NASA headquarters. “I expect great advances in theoretical solar physics and space weather forecasting.”

More answers

NASA also expects the 3D images of the Sun to shed light on previously overlooked connections. For instance, researchers have long suspected that solar activity can “go global,” with eruptions on opposite sides of the Sun triggering and feeding off each other. The global images will allow them to actually study the phenomenon.

In conjunction with NASA’s Earth-orbiting Solar Dynamics Observatory, the STEREO-A and STEREO-B probes should be able to image the entire globe of the Sun for the next eight years. Therefore, these initial images are just the beginning of what should be some truly stellar images and movies that NASA says will be released in the weeks ahead as more of the data from the STEREO probes is processed.

Sourced & published by Henry Sapiecha

3D telepresence of people

It may not be a jet-powered car, but it’s definitely something we’ve seen in sci-fi movies before – the ability to converse with a life-size holographic image of another person in real time. 3D movies are just the start of it, and there’s more to come.

The futurists at IBM point to recent advances in 3D cameras and movies, predicting that holography chat (aka 3D telepresence) can’t be all that far behind. Already, the University of Arizona has unveiled a system that can transmit holographic images in near-real-time.

It is also predicted that 3D visualization could be applied to data, allowing researchers to “step inside” software programs (wasn’t that just in a movie?), computer models, or pretty much anything else that is limited by a simple 2D screen. IBM compares it to the way in which the Earth appears undistorted when we experience it first-hand in three dimensions, yet it appears pinched at the top and bottom when we see it on a two-dimensional world map.

Maybe travelling inside the blood vessels of the human body is not so silly after all. We will see…

Sourced & published by Henry Sapiecha

ENCRYPTION CODE CRACKED FOR CANON CAMERAS

Take note of the Russian programmer who rose to modest fame with his detention in the United States in 2001: his work has helped crack encryption used in Canon cameras.

The programmer and encryption expert is Dmitry Sklyarov, and his company, Elcomsoft, has found a vulnerability in Canon’s OSK-E3 system for ensuring that photos such as those used in police evidence-gathering haven’t been tampered with.

The result is that the company can create doctored photos that the technology thinks are authentic. To illustrate its point, it released a few doctored photos that it says pass Canon’s integrity checks.

“The vulnerability discovered by ElcomSoft questions the authenticity of all Canon signed photographic evidence and published photos and effectively proves the entire Canon Original Data Security system useless,” the company said in a statement. Sklyarov presented the findings at the Confidence 2.0 conference last week.

Canon didn’t immediately respond to a request for comment.

Sklyarov discussed his methods in a conference presentation (PDF). In it, he offered some advice on how Canon could fix the issue in future cameras. Along with the technical advice was this: “Hire people who really understand security.”
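Although Canon has not published the internals of OSK-E3, image-authenticity systems of this kind generally boil down to a keyed signature (an HMAC, for example) computed over the image data and verified against a value stored with the file. The generic sketch below, which is not Canon's actual algorithm, shows why such a scheme collapses once the signing key can be pulled out of the camera, as Elcomsoft did: anyone holding the key can re-sign a doctored image so that it verifies as original.

```python
# Generic illustration of a keyed image-authenticity check and why leaking the
# signing key defeats it. This is NOT Canon's actual OSK-E3 algorithm.
import hmac, hashlib

def sign(image_bytes: bytes, key: bytes) -> bytes:
    return hmac.new(key, image_bytes, hashlib.sha256).digest()

def verify(image_bytes: bytes, tag: bytes, key: bytes) -> bool:
    return hmac.compare_digest(sign(image_bytes, key), tag)

camera_key = b"secret-key-burned-into-the-camera"  # placeholder value

original = b"...raw image data..."
tag = sign(original, camera_key)                 # what the camera stores with the file
print(verify(original, tag, camera_key))         # True: photo accepted as authentic

doctored = b"...raw image data, scene altered..."
print(verify(doctored, tag, camera_key))         # False: naive tampering is caught

# But once the per-camera key has been extracted, an attacker simply re-signs:
forged_tag = sign(doctored, camera_key)
print(verify(doctored, forged_tag, camera_key))  # True: doctored photo passes the check
```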

Wait, which country gave the Statue of Liberty to the U.S. as a present? Another doctored Elcomsoft image.

(Credit: Elcomsoft)

Sklyarov’s earlier fame came when the FBI arrested him after he presented information about cracking the encryption of an Adobe Systems eBook format. He was charged with criminal violations of the Digital Millennium Copyright Act (DMCA). Adobe backed off from its support of the case after programmer protests, though, and Sklyarov was acquitted.

Sourced & published by Henry Sapiecha