
In an effort to create a more viable material for drug delivery, a team of researchers has accidentally created an entirely new material thought for more than 100 years to be impossible to make. Upsalite is a new form of non-toxic magnesium carbonate with an extremely porous structure, which allows it to absorb more moisture at low humidities than any other known material. “The total area of the pore walls of one gram of material would cover 800 square meters (8,611 sq ft) if you would ‘roll them out’,” Maria Strømme, Professor of Nanotechnology at Uppsala University, Sweden, tells Gizmag. That’s roughly equal to the sail area of a megayacht. Aside from using substantially less energy to create drier environments for producing electronics, batteries and pharmaceuticals, Upsalite could also be used to clean up oil spills, toxic waste and residues.


Scientists have long puzzled over this particular form of magnesium carbonate since it doesn’t normally occur in nature and has defied synthesis in laboratories. Until now, its properties have remained a mystery. Strømme confesses that they didn’t actually set out to create it. “We were really into making a porous calcium carbonate for drug delivery purposes and wanted to try to make a similarly porous magnesium carbonate since we knew that magnesium carbonate was non-toxic and already approved for drug delivery,” she tells us. “We tried to use the same process as with the calcium carbonate, totally unaware of the fact that researchers had tried to make disordered magnesium carbonates for many decades using this route without succeeding.”

The breakthrough came when they tweaked the process a little and accidentally left the material in the reaction chamber over a weekend. On their return they found that a gel had formed in the chamber. “We realized that the material we had made was one that had been claimed impossible to make,” Strømme adds. A year spent refining the process gave them Upsalite.

While creating a material long deemed impossible sounds like cause for celebration, Strømme says the major scientific breakthrough lies in its remarkable properties. No other known carbonate has a surface area as large as 800 sq m per gram. Though scientists have created many new high-surface-area materials with nanotechnology, such as carbon nanotubes and zeolites, what makes Upsalite special is the minuteness of its nanopores.

Each nanopore is less than 10 nanometers in diameter, which means that one gram of the material contains a whopping 26 trillion nanopores. “If a material has many small pores,” explains Strømme, “it gives the material a very large surface area per gram, which gives the material many reaction sites, i.e. sites that can react with the environment, with specific chemicals, or in the case of Upsalite, with moisture.”
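To see how those numbers hang together, here is a minimal back-of-the-envelope sketch. It treats every pore as an open cylinder of the quoted 10 nm diameter, which is an idealization (the real pore network is disordered and interconnected), and asks how much channel length per pore the 800 sq m figure implies.

```python
import math

# Figures quoted above, per gram of Upsalite
surface_area_m2 = 800.0      # pore-wall area
pore_count = 26e12           # roughly 26 trillion nanopores
pore_diameter_m = 10e-9      # quoted upper bound: <10 nm

# Idealize each pore as an open cylinder of the quoted diameter
area_per_pore = surface_area_m2 / pore_count       # ~3.1e-11 m^2
circumference = math.pi * pore_diameter_m          # ~3.1e-8 m
implied_length_m = area_per_pore / circumference   # ~1e-3 m

print(f"implied channel length per pore: {implied_length_m * 1e3:.2f} mm")
```

In other words, the enormous surface area comes from vast lengths of channel at nanometre diameter, not from large individual pores.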

Upsalite’s moisture absorption properties are striking. It was found to absorb 20 times more moisture than fumed silica, a material used in cat box fillers and as an anti-caking agent for moisture control during the transport of moisture-sensitive goods. This means you’d need only one-twentieth as much material to do the same moisture-control job.

Its unique pore structure also opens up new applications in drug delivery. The pores can host drugs that need protection from the environment before being delivered to the human body. It’s also useful in thermal insulation, in drying residues from the oil and gas industries, and as a desiccant for humidity control. Potential applications are still being discovered as the material undergoes development for industrial use.

The team at Uppsala University is commercializing Upsalite through their spin-off company Disruptive Materials. An article describing the material and its properties can be found at PLOS ONE.

Source: Disruptive Materials


Henry Sapiecha

Senior geneticists and bio-ethicists have agreed with the US spy chief’s claim that genetic engineering could be a serious threat if put to nefarious ends


Gene editing has been made possible by rapid advances in technology.

A senior geneticist and a bioethicist warned on Friday that they fear “rogue scientists” operating outside the bounds of law, and agreed with a US intelligence chief’s assertion this week that gene editing technology could have huge, and potentially dangerous, consequences.

“I’m very, very concerned about this whole notion of there being rogue clinics doing these things,” geneticist Robin Lovell-Badge told reporters at the American Association for the Advancement of Science (AAAS) conference in Washington DC, referring to the unregulated work of gene scientists. “It really scares me, it’s bad for the field.”

Recent advances in genetics allow scientists to edit DNA quickly and accurately, making research into diseases, such as cystic fibrosis and cancer, easier than ever before. But researchers increasingly caution that they have to work with extreme care, for fear that gene editing could be deployed as bioterrorism or, in a more likely scenario, result in an accident that could make humans more susceptible to diseases rather than less.

Earlier this week the US director of national intelligence, James Clapper, testified before the Senate as part of his worldwide threat assessment report that he considers gene editing one of the six potential weapons of mass destruction that are major threats facing the country, alongside the nuclear prospects of Iran, North Korea and China.

Bioethicist Francoise Baylis, who also spoke at AAAS and who took part in the international summit that debated gene editing last year, said the technology behind gene editing could be dangerous on a global or individual level.

“I think bioterrorism is a reality, and a risk factor we need to take into consideration,” she said. “It’s like any dual-use technology that can be used for good or evil.”

The Dalhousie University professor compared the advances in technology, particularly a tool called Crispr-Cas9, to a hammer in the hands of good and bad actors alike. “It can be the murder weapon, it can be the gavel the judge uses,” she said. “So I don’t know that there’s any way to sort of control that.”

Since its discovery, Crispr-Cas9 has revolutionized gene editing by helping scientists target certain genes with an unprecedented degree of speed and accuracy. The bacteria-originated tool has sparked a patent war among a handful of scientists, and a new industry worth billions.

In the US, members of the intelligence community agreed that gene editing represents a largely open field. Clapper’s report to the Senate cited the easy access, rapid development and weak regulation abroad in its argument that the “deliberate or unintentional misuse” of gene editing technology “might lead to far-reaching economic and national security implications”.

Daniel Gerstein, a former under-secretary at the Department of Homeland Security, said: “It’s interesting that we have something that is clearly a technology that was designed for legitimate biotechnology research which has been associated in this way with weapons of mass destruction.”

But the prospects are simultaneously magnificent and alarming, said Columbia University bioethicist Robert Klitzman, who was happy to see gene editing on the list.

“I think that this is a very powerful technology,” Klitzman said. “I think as a result that there are things that need to be done that have not yet been talked about.”

Research and technology are advancing so fast that it is easy to imagine Crispr used for nefarious ends – or as the enabler of a catastrophic accident, said Klitzman.

“The infectious agent responsible for bubonic plague, if altered through Crispr,” he said, “could potentially be used as a WMD. Currently, we have effective treatment against it. But if it were altered, it could potentially become resistant to these treatments and thus be deadly.”

Setting standards on who can buy the technology and using discretion when publishing scientific research could be key, he said. “Just like guns, you need some kind of security check.”

But regulating gene editing would be like trying to govern how people use fire, said Michael Wiles, a senior director at the Jackson Lab in Maine, a leader in growing genetically modified mice for research.

“Every technology has two edges,” Wiles said. “It’s a disturbing but real concept with humans … you can’t control it.”

While intentional abuse of gene editing is ringing alarm bells, some at AAAS were more wary of accidental adverse consequences from reckless gene editing. Lovell-Badge said he particularly fears the kind of work that might go on in labs or fertility clinics where work on human embryos is performed carelessly and without oversight. Such labs, he said, have “popped up in many countries, including the US”, with “no real basis in science or fact, and may be dangerous in some cases”.

Some of these labs might alter particular genes to create so-called “designer babies”, with tailored features that range from height and eye color to disease immunity. But turning a given gene on or off could also affect the genes around it. For example, giving a baby immunity to one disease could mean it’s now vulnerable to other diseases or infections.

Baylis maintained that genetic enhancements of humans are inevitable, even if she could not say what they will be. But she said that unregulated modifications could exacerbate inequality and create “a new eugenics, a different kind of eugenics”.

Other scientists disagreed – on both sides of the debate. Sarah Chan, a University of Edinburgh bioethicist, said fears of inequality are “definitely overblown”, and that “designer babies” are not inevitable. She added that technology that could make diseases more infectious and dangerous has existed for decades, as have the questions around it.

“Some of the fears and concerns surrounding genome editing technology are, if not overblown, perhaps misdirected.”

Taking the contrary opinion, geneticist Robert Winston said: “Regulation cannot prevent this from happening either in the UK eventually or much more likely elsewhere.

“With the power of the market and the open information published in journals,” Winston said, “I am sure that humans will want to try to ‘enhance’ their children and will be prepared to pay large sums to do so.”


Henry Sapiecha

 


From 2003 to 2013, the number of scientists and engineers residing in the United States rose from 21.6 million to 29 million. This 10-year increase included significant growth in the number of immigrant scientists and engineers, from 3.4 million to 5.2 million.

Immigrants went from making up 16 percent of the science and engineering workforce to 18 percent, according to a report from the National Science Foundation’s National Center for Science and Engineering Statistics (NCSES). In 2013, the latest year for which numbers are available, 63 percent of U.S. immigrant scientists and engineers were naturalized citizens, while 22 percent were permanent residents and 15 percent were temporary visa holders.
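As a quick arithmetic check, the 16 and 18 percent shares follow directly from the rounded headcounts quoted above (small discrepancies are just rounding):

```python
# Rounded headcounts quoted above, in millions
total_2003, immigrant_2003 = 21.6, 3.4
total_2013, immigrant_2013 = 29.0, 5.2

share_2003 = immigrant_2003 / total_2003 * 100   # ~15.7%, reported as 16%
share_2013 = immigrant_2013 / total_2013 * 100   # ~17.9%, reported as 18%

print(f"2003 immigrant share: {share_2003:.1f}%")
print(f"2013 immigrant share: {share_2013:.1f}%")
```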

Of the immigrant scientists and engineers in the United States in 2013:

  • 57 percent were born in Asia.
  • 20 percent were born in North America (excluding the United States), Central America, the Caribbean, or South America.
  • 16 percent were born in Europe.
  • 6 percent were born in Africa.
  • And less than 1 percent were born in Oceania.

Among Asian countries, India continued its trend of being the top country of birth for immigrant scientists and engineers, with 950,000 out of Asia’s total 2.96 million. India’s 2013 figure represented an 85 percent increase from 2003.

Also since 2003, the number of scientists and engineers from the Philippines increased 53 percent and the number from China, including Hong Kong and Macau, increased 34 percent.

The NCSES report found that immigrant scientists and engineers were more likely to have earned post-baccalaureate degrees than their U.S.-born counterparts. In 2013, 32 percent of immigrant scientists reported their highest degree was a master’s (compared to 29 percent of their U.S.-born counterparts) and 9 percent reported it was a doctorate (compared to 4 percent of their U.S.-born counterparts). The most common fields of study for immigrant scientists and engineers in 2013 were engineering, computer and mathematical sciences, and social and related sciences.

Over 80 percent of immigrant scientists and engineers were employed in 2013, the same percentage as their U.S.-born counterparts. Among the immigrants in the science and engineering workforce, the largest share (18 percent) worked in computer and mathematical sciences, while the second-largest share (8 percent) worked in engineering. Three occupations — life scientist, computer and mathematics scientist and social and related scientist — saw substantial immigrant employment growth from 2003 to 2013.


Henry Sapiecha


Rumors are rippling through the science world that physicists may have detected gravitational waves, a key element of Einstein’s theory that, if confirmed, would be one of the biggest discoveries of our time.

There has been no announcement, no peer review or publication of the findings—all typically important steps in the process of releasing reliable and verifiable scientific research.

Instead, a message on Twitter from an Arizona State University cosmologist, Lawrence Krauss, has sparked a firestorm of speculation and excitement.

Krauss does not work with the Advanced Laser Interferometer Gravitational Wave Observatory, or LIGO, which is searching for ripples in the fabric of space and time.

But he tweeted on Monday about the apparent confirmation of a rumor he’d heard some months ago: that LIGO scientists were writing up a paper on gravitational waves they had discovered using US-based detectors.

“My earlier rumor about LIGO has been confirmed by independent sources. Stay tuned! Gravitational waves may have been discovered!! Exciting,” Krauss tweeted.

His message has since been retweeted 1,800 times.

If gravitational waves have been spotted, it would confirm the final missing piece of what Albert Einstein predicted a century ago in his theory of general relativity.

The discovery would open a new window on the universe by showing scientists for the first time that gravitational waves exist, in places such as the edge of black holes at the beginning of time, filling in a major gap in our understanding of how the universe was born.

A team of scientists on a project called BICEP2 (Background Imaging of Cosmic Extragalactic Polarization) announced in 2014 that they had discovered these very ripples in space time, but soon admitted that their findings may have been just galactic dust.

A spokeswoman for the LIGO collaboration, Gabriela Gonzalez, was quoted in The Guardian as saying there is no announcement for now.

“The LIGO instruments are still taking data today, and it takes us time to analyze, interpret and review results, so we don’t have any results to share yet,” said Gonzalez, professor of physics and astronomy at Louisiana State University.

“We take pride in reviewing our results carefully before submitting them for publication—and for important results, we plan to ask for our papers to be peer-reviewed before we announce the results—that takes time too!”

Other observers pointed out that any supposed detection may be a simple practice run for the science teams, not a real discovery.

“Caveat earlier mentioned: they have engineering runs with blind signals inserted that mimic discoveries. Am told this isn’t one,” Krauss tweeted.

But science enthusiasts may have to wait awhile longer to get all the details.

The LIGO team’s first run of data ends Tuesday, January 12.

“We expect to have news on the run results in the next few months,” Gonzalez was quoted as saying by New Scientist magazine.


Henry Sapiecha

2015 was an amazing year for science, but it was also a year for some amazingly overhyped science.

We put our hearts ahead of our data when speculating about advanced extraterrestrial civilisations. We so wanted to believe that a looming ice age would save us from global warming. And we were horrified to learn that the internet’s favourite meat product might cause cancer, along with everything else in the goddamn universe. Here are the most overhyped scientific discoveries of 2015, in all their glory.

The so-called alien megastructure


It isn’t an overhyped scientific discoveries list without some wild speculation about extraterrestrials, and 2015 did not disappoint. If you weren’t familiar with the term “alien megastructure” before, you certainly are now.

The alien hullabaloo started in early October, when astronomers announced the discovery of KIC 8462852, a weird star in the Kepler database that flickers aperiodically, its brightness sometimes dropping by as much as 20 per cent. It’s certainly not a transiting planet, but it doesn’t look like anything else we’ve seen, either. Still, nobody outside of the astro community would have given a rat’s arse about the cosmic oddity if SETI researchers hadn’t made this humble suggestion: Perhaps the star was being occluded by a giant, alien construction project, a la Dyson sphere.

The citizens of planet Earth worked themselves into a rabid frenzy over the idea, to the point that Neil deGrasse Tyson had to go on late night TV and tell us all to calm the hell down. SETI astronomers capitalised on the momentum, mobilising state-of-the-art observatories to scour KIC 8462852’s cosmic neighbourhood for the radio signals and laser pulses that would lend credence to the wild idea. They found not a single fingerprint.

The latest thinking is that KIC 8462852 is probably being occluded by a swarm of comets — BORING — but I’m personally holding out hope that somebody follows up on the giant space walrus idea.

[Image: Artist’s representation of a Dyson sphere, crumbling like the alien megastructure hypothesis, via Danielle Futselaar/SETI International]

Bacon cancer


In October, the world was confronted with some rather unsettling news: bacon, along with other processed meats including hot dogs and ham, is carcinogenic, according to a new scientific paper which evaluated over 800 studies for links between processed or red meat intake and cancer. Unfortunately, many media reports took the “bacon cancer” soundbite and ran with it, leaving readers to imagine that consuming bacon is similar to touching nuclear waste. It’s not.

There are a few reasons we shouldn’t panic about this revelation, as Gizmodo’s George Dvorsky lays out in detail. First and foremost, while the new study did find a real statistical correlation between processed meat consumption and bowel cancer, many subsequent reports failed to identify the magnitude of risk. That turns out to be fairly small. As you might expect, it increases slightly with the amount of processed meat consumed.

To make matters even more confusing, because processed meat is now classified as a Group 1 carcinogen, some articles suggested eating bacon is as bad as smoking cigarettes or asbestos exposure — other Group 1 carcinogens. But again, the Group 1 label has nothing to do with risk magnitude, only the strength of scientific evidence linking a substance to cancer. About 34,000 cancer deaths each year are associated with a diet high in processed meat. Smoking, on the other hand, leads to about a million deaths a year.

If there’s a takeaway in all of this, it’s that it’s probably a good idea to limit your consumption of processed meat — health professionals have been suggesting this for years anyway — and to always be sceptical when reading about new linkages between certain foods and cancer. Because really, when you get down to it, pretty much anything can cause cancer.

Warp drive?!?!?!


It was in 2014 that we first heard whispers of NASA’s EM Drive, an “impossible” engine that could (in theory) accelerate objects (our future spacecraft) to near relativistic speeds without the use of any propellant, simply by bouncing microwaves around a waveguide. The laboratory “evidence” for the physics-defying engine might have been nothing more than analytical error — or, as one expert put it, bullshit — but that didn’t stop people from continuing to scour NASA engineering forums for additional affirmation of the science fictional technology in 2015.


Lo and behold, the sleuths of the internet found some. Apparently, the engineers working on the EM Drive decided to address some of the sceptics’ concerns head-on this year, by re-running their experiments in a closed vacuum to ensure the thrust they were measuring wasn’t caused by environmental noise. As it happens, the new EM Drive tests in noise-free conditions failed to falsify the original results. That is, the researchers had apparently produced a minuscule amount of thrust without any propellant.

Once again, media reports made it sound like NASA was on the brink of unveiling an intergalactic transport system.


The real problem with the EM drive isn’t the scientists. It isn’t even the science. The problem is that a) NASA hasn’t claimed that the system works; b) there have been no peer-reviewed papers on the subject; and c) as far as we can tell, all evidence for the physics-defying machine comes from a handful of short-term experiments. This is a story of scientists caught in the act of tinkering by people who want Star Trek to happen now.

[Top image via Star Trek Wiki. EM Drive prototype image via NASA Spaceflight Forum]

An ice age in 2030?


You know what would really save us from this global warming mess we’ve gotten ourselves into? An ice age! And earlier this year, it seemed like our prayers were answered, when a new astronomy study suggested that the sun is heading for a period of extremely low solar output — a so-called ‘Maunder minimum’. A press release accompanying the study explained that predictions from the astronomers’ new models “suggest that solar activity will fall by 60 per cent during the 2030s to conditions last seen during the ‘mini ice age’ that began in 1645.”

This led to some confusion.

Even if it’s true that the sun’s output is on the verge of declining to levels not seen in over 350 years — and the likelihood of that varies greatly from study to study — it’s misleading to say we’re on the brink of an ice age. The Little Ice Age saw temperatures drop by about 1 °C, whereas real ice ages are characterised by global average temperatures 5 °C cooler than today.

It’s also misleading to insinuate that the 17th century Maunder minimum even caused the Little Ice Age. As astronomer Jim Wild explained earlier this year, the Little Ice Age began over a century before the start of the Maunder minimum and continued long after it was over. People still aren’t sure what led to the cold snap — the leading suspect is currently volcanic activity — or if it was even a global phenomenon.

Finally, the overwhelming consensus of the world’s climate scientists is that the influence of solar variability on climate is dwarfed by the impact of increased CO2 in the atmosphere. Indeed, many calculations suggest that a “grand solar minimum” would at best offset a few years’ worth of the warming that’s being caused by human carbon emissions.

Simply put, we cannot bank on the vagaries of the sun to save our collective arses this century.

[Image: London policemen on ice skates on the frozen River Thames circa 1900, via Getty]

The tardigrade’s seriously weird genome


Tardigrades — those weird, wonderful, microscopic poncho bears that’re virtually indestructible — got even weirder this year, when researchers at the University of North Carolina Chapel Hill decided to sequence the tardigrade genome. Astonishingly, the team discovered that a full sixth of the animal’s DNA was not animal DNA at all: it was from plants, fungi, bacteria, and viruses. Nobody had ever seen anything like it before, which in hindsight, maybe should have been a red flag.

As Annalee Newitz explained last month, the authors suggested the tardigrade’s patchwork genetic code was acquired via horizontal gene transfer, and that this could be related to the animal’s unique stress response:

“When tardigrades are desiccated, their DNA breaks into pieces. Any organisms around them will also suffer the same fate. But when water returns to the tardigrade’s environment, they re-hydrate and return to life. As they re-hydrate, their cell walls become porous and leaky, and fragments of DNA from the desiccated organisms around them can flow inside and merge with the animal’s rejuvenating DNA.”

Furthermore, the UNC authors speculated that the tardigrade’s borrowed genes may help the animal withstand everything from boiling water to the vacuum of space. It’s a fascinating story about an amazing organism, so it’s no surprise the paper got a lot of pickup. But it’s not at all clear that the conclusions are sound.

Indeed, less than one week after the UNC Chapel Hill version of the tardigrade genome was published in PNAS, another lab at the University of Edinburgh posted a pre-print of their tardigrade genome analysis, which painted an entirely different picture. Edinburgh researchers found very little evidence for horizontal gene transfer — as few as 36 genes, compared with the 6600 reported by UNC Chapel Hill.

How could this be? One possibility is that many of the sequences the UNC team called bona fide tardigrade genes were, in fact, microbial contamination. As science journalist Ed Yong explains over at The Atlantic, the Edinburgh team carefully cleaned up their data to remove many sequences that were only present in trace quantities, which the scientists presumed to be contaminants. “I want to believe that massive HGT happened, because it would be an awesome story,” Mark Baltrus, lead author of the Edinburgh study, told The Atlantic. “But the problem is that extraordinary claims require extraordinary evidence.”

On the bright side, what could have become a bitter dispute between rival labs turned into a fruitful collaboration: the two teams are now sharing their data in an attempt to reconcile their disparate findings.

Science is a messy, error-fraught business — and if we think we’re doing it all right the first time, chances are we’re wrong.

[Image via Sinclair Stammers]


Henry Sapiecha


Everyone’s favorite Red Hot Nickel Ball has tackled challenges from Nokia phones to jawbreakers. Now, for Christmas, the glowing sphere of destruction is giving a warm, holiday hug to a bowl of flame-retardant tinsel. Flame-retardant maybe, but certainly not Red Hot Nickel Ball-proof.

I cannot tell you what that smoke smells like or is made of but it looks pretty nice if you don’t think about how toxic it probably is. Have a very merry Christmas up-wind!

Source: carsandwater


Henry Sapiecha


Steven Sasson in 1973, the year he started working at Eastman Kodak.

Imagine a world where photography is a slow process that is impossible to master without years of study or apprenticeship. A world without iPhones or Instagram, where one company reigned supreme. Such a world existed in 1973, when Steven Sasson, a young engineer, went to work for Eastman Kodak.

Two years later he invented digital photography and made the first digital camera.

Mr. Sasson, all of 24 years old, invented the process that allows us to make photos with our phones, send images around the world in seconds and share them with millions of people. The same process completely disrupted the industry that was dominated by his Rochester employer and set off a decade of complaints by professional photographers fretting over the ruination of their profession.

It started out innocently enough.

Soon after arriving at Kodak, Mr. Sasson was given a seemingly unimportant task — to see whether there was any practical use for a charge-coupled device (C.C.D.), which had been invented a few years earlier.

“Hardly anybody knew I was working on this, because it wasn’t that big of a project,” Mr. Sasson said. “It wasn’t secret. It was just a project to keep me from getting into trouble doing something else, I guess.”

The very first digital camera, created by Steven Sasson in 1975. This camera was the basis for the US patent issued on December 26, 1978.

He quickly ordered a couple of them and set out to evaluate the devices, which consisted of a sensor that took an incoming two-dimensional light pattern and converted it into an electrical signal. Mr. Sasson wanted to capture an image with it, but the C.C.D. couldn’t hold it because the electrical pulses quickly dissipated.

To store the image, he decided to use what was at that time a relatively new process — digitalization — turning the electronic pulses into numbers. But that solution led to another challenge: holding the data temporarily in RAM and then getting it onto digital magnetic tape.
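To make the “turning electronic pulses into numbers” step concrete, here is a minimal sketch of what that digitization amounts to: sampling an analog sensor voltage and quantizing it to an integer code. The 4-bit depth and voltage range are illustrative assumptions, not the parameters of Sasson’s actual circuitry.

```python
def quantize(voltage, v_max=1.0, bits=4):
    """Map an analog voltage in [0, v_max] to an integer code."""
    levels = 2 ** bits
    code = int(voltage / v_max * (levels - 1) + 0.5)   # round to nearest level
    return max(0, min(levels - 1, code))

# One scan line of made-up sensor readings, quantized for storage
analog_line = [0.02, 0.10, 0.47, 0.81, 0.95, 0.33]
digital_line = [quantize(v) for v in analog_line]
print(digital_line)   # [0, 2, 7, 12, 14, 5]
```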

The final result was a Rube Goldberg device with a lens scavenged from a used Super-8 movie camera; a portable digital cassette recorder; 16 nickel cadmium batteries; an analog/digital converter; and several dozen circuits — all wired together on half a dozen circuit boards.

It looks strange today, but remember, this was before personal computers – the first build-it-yourself Apple computer kit went on sale the next year for $666.66.

The camera alone was a historic accomplishment, but he needed to invent a playback system that would take the digital information on the cassette tape and turn it into “something that you could see” on a television set: a digital image.

“This was more than just a camera,” said Mr. Sasson, who was born and raised in Brooklyn. “It was a photographic system to demonstrate the idea of an all-electronic camera that didn’t use film and didn’t use paper, and no consumables at all in the capturing and display of still photographic images.”

The camera and the playback system were the beginning of the digital photography era. But the digital revolution did not come easily at Kodak.


Mr. Sasson made a series of demonstrations to groups of executives from the marketing, technical and business departments, and then to their bosses, and to their bosses’ bosses. He brought the portable camera into conference rooms and demonstrated the system by taking a photo of people in the room.

“It only took 50 milliseconds to capture the image, but it took 23 seconds to record it to the tape,” Mr. Sasson said. “I’d pop the cassette tape out, hand it to my assistant and he put it in our playback unit. About 30 seconds later, up popped the 100 pixel by 100 pixel black and white image.”
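Those figures imply a strikingly slow write path by modern standards. A rough calculation (the 24-megapixel comparison is purely illustrative):

```python
# Figures from the demonstration described above
pixels = 100 * 100        # 0.01 megapixels per frame
record_s = 23.0           # seconds to write one frame to cassette

rate = pixels / record_s
print(f"~{rate:.0f} pixels per second to tape")        # ~435 pixels/s

# For scale: a hypothetical 24-megapixel frame at that rate
hours = 24e6 / rate / 3600
print(f"a 24 MP frame would take ~{hours:.0f} hours")  # ~15 hours
```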

Though the quality was poor, Mr. Sasson told them that the resolution would improve rapidly as technology advanced and that it could compete in the consumer market against 110 film and 135 film cameras. Trying to compare it with already existing consumer electronics, he suggested they “think of it as an HP calculator with a lens.” He even talked about sending images on a telephone line.

Their response was tepid, at best.

“They were convinced that no one would ever want to look at their pictures on a television set,” he said. “Print had been with us for over 100 years, no one was complaining about prints, they were very inexpensive, and so why would anyone want to look at their picture on a television set?”

The main objections came from the marketing and business sides. Kodak had a virtual monopoly on the United States photography market, and made money on every step of the photographic process. If you wanted to photograph your child’s birthday party you would likely be using a Kodak Instamatic, Kodak film and Kodak flash cubes. You would have it processed either at the corner drugstore or mail the film to Kodak and get back prints made with Kodak chemistry on Kodak paper.

It was an excellent business model.

When Kodak executives asked when digital photography could compete, Mr. Sasson used Moore’s Law, which predicts how fast digital technology advances. He would need two million pixels to compete against 110 negative color film, so he estimated 15 to 20 years. Kodak offered its first consumer cameras 18 years later.
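A rough reconstruction of that Moore’s Law estimate is sketched below; the starting resolution and the doubling period are assumptions about how the calculation could have been framed, not Sasson’s actual worksheet.

```python
import math

start_mp = 0.01    # the 100 x 100 pixel prototype
target_mp = 2.0    # resolution needed to rival 110 colour film

doublings = math.log2(target_mp / start_mp)   # ~7.6 doublings
for years_per_doubling in (2.0, 2.5):
    years = doublings * years_per_doubling
    print(f"~{years:.0f} years at one doubling every {years_per_doubling} years")
# ~15 and ~19 years, consistent with the 15-to-20-year estimate
```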

“When you’re talking to a bunch of corporate guys about 18 to 20 years in the future, when none of those guys will still be in the company, they don’t get too excited about it,” he said. “But they allowed me to continue to work on digital cameras, image compression and memory cards.”

The first digital camera was patented in 1978. It was called the electronic still camera. But Mr. Sasson was not allowed to publicly talk about it or show his prototype to anyone outside Kodak.

In 1989, Mr. Sasson and a colleague, Robert Hills, created the first modern digital single-lens reflex (S.L.R.) camera that looks and functions like today’s professional models. It had a 1.2 megapixel sensor, and used image compression and memory cards.


The 1989 version of the digital camera, known as the Ecam (electronic camera). This is the basis of the United States patent issued on May 14, 1991.

But Kodak’s marketing department was not interested in it. Mr. Sasson was told they could sell the camera, but wouldn’t — because it would eat away at the company’s film sales.

“When we built that camera, the argument was over,” Mr. Sasson said. “It was just a matter of time, and yet Kodak didn’t really embrace any of it. That camera never saw the light of day.”

Still, until it expired in the United States in 2007, the digital camera patent helped earn billions for Kodak, since it — not Mr. Sasson — owned it, making most digital camera manufacturers pay Kodak for the use of the technology. Though Kodak did eventually market both professional and consumer cameras, it did not fully embrace digital photography until it was too late.

“Every digital camera that was sold took away from a film camera and we knew how much money we made on film,” Mr. Sasson said. “That was the argument. Of course, the problem is pretty soon you won’t be able to sell film — and that was my position.”

Today, the first digital camera Mr. Sasson made in 1975 is on display at the Smithsonian’s National Museum of American History. President Obama awarded Mr. Sasson the National Medal of Technology and Innovation at a 2009 White House ceremony.

Three years later, Eastman Kodak filed for bankruptcy.

Source: The New York Times


Henry Sapiecha


Chinese scientists have developed a new foam-like ‘super material’ that is – to use a simile – as light as a balloon yet as strong as metal.

The foam-like material was created when tiny tubes of graphene were formed into a cellular structure that boasts the same stability as diamond.

Graphene has attracted great interest among researchers in recent years. And this was what led the researchers at the Chinese Academy of Sciences’ Shanghai Institute of Ceramics to develop the new material.

About 207 times stronger than steel by weight and able to conduct heat and electricity with very high efficiency, the new foam-like material is designed to support something 40,000 times its own weight without bending, reports the science journal Advanced Materials.

The researchers contend that one piece of the graphene foam can easily withstand the impact of a blow with a force of more than 14,500 pounds per square inch – almost as much pressure as is experienced at the world’s deepest point in the Pacific Ocean, the Challenger Deep of the Mariana Trench.

It is for this reason that the Shanghai research team said their newly created material could withstand more external shocks than other previously reported graphene materials.
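A rough hydrostatic check of that pressure comparison (ignoring water compressibility; the seawater density is a typical assumed value, not a figure from the study):

```python
PSI_TO_PA = 6894.76
pressure_pa = 14500 * PSI_TO_PA            # ~1.0e8 Pa, about 1,000 atmospheres

rho_seawater = 1025.0   # kg/m^3, typical near-surface value
g = 9.81                # m/s^2
equivalent_depth_m = pressure_pa / (rho_seawater * g)

print(f"{pressure_pa / 1e6:.0f} MPa is roughly {equivalent_depth_m:,.0f} m of seawater")
# ~9,900 m, in the same range as the ~10,900 m deep Challenger Deep
```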

It could also be squashed to just 5 per cent of its original size and still return to its original shape, and remained intact after the process was repeated 1,000 times.

Primarily destined for military applications, the novel material could be used as a cushion under the surface of bulletproof vests or on the outside of tanks to absorb the shocks from incoming projectiles, the Shanghai study said.


Henry Sapiecha


Boeing has unveiled a new synthetic metal called the microlattice, a material that’s being hailed as the lightest metal ever made.

Microlattice is a nickel-phosphorus alloy coated onto an open polymer scaffold. When the polymer is removed, what remains is a structure consisting of 100-nanometer-thick walls of nickel-phosphorus, making the material 99.99% air.

While the structure of microlattice is strong, it is so light that it can be balanced on top of a dandelion. It is about 100 times lighter than Styrofoam and could well be the key component in the future of aeronautical design.
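The “99.99% air” figure above implies an extraordinarily low bulk density. A minimal estimate, assuming a typical nickel-phosphorus alloy density (not a figure given in the article):

```python
rho_nip = 8000.0          # kg/m^3, approximate density of a Ni-P alloy (assumed)
solid_fraction = 0.0001   # 99.99% air means 0.01% solid by volume

rho_microlattice = rho_nip * solid_fraction
print(f"~{rho_microlattice:.1f} kg/m^3")   # roughly 1 kg/m^3
# Typical expanded polystyrene is tens of kg/m^3, so an
# orders-of-magnitude weight advantage is at least plausible.
```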

Microlattice’s design is influenced by the structure of human bone. It has a 3D open-cellular polymer structure consisting of interconnected hollow tubes, each with walls about 1,000 times thinner than a human hair.

This arrangement makes the metal extremely light and very hard to crush. Additionally, microlattice’s ultra-low density gives it a unique mechanical behavior, in that it can recover completely from compressions exceeding 50% strain and absorb high amounts of energy.

Sophia Yang, a research scientist in architected materials at HRL Labs who worked with Boeing on the project, stated: “One of the main applications that we’ve been looking into is structural components in aerospace.”

Although direct applications for microlattice have not been settled yet, Boeing is looking to use it in structural reinforcement for airplanes – which could reduce the weight of the aircraft significantly and improve fuel efficiency.

Video and image courtesy of Boeing


Henry Sapiecha

Many scientists perform their research in totally uncharted territories. Some of them flirt with danger on a daily basis. A small number even bring about their own demise through overexposure to toxic substances or by working alone with hazardous equipment. Watch this video showing 10 famous scientists who were killed by their own experiments.
Source: Alltime10s/Youtube


Henry Sapiecha