These great publications not only inspired awe and wonder—they also helped us better understand the workings of the world in which we live.

At its heart, science is about curiosity. So it stands to reason that a book about science should make you examine your world more closely, and in doing so, give you a sense of childhood wonder and whimsy. It should make you say, “Oh, wow.”

But the best science and tech writing goes one step further. With delight and mystery—and sans unnecessary jargon and technical details—this genre can help us better understand some of the world’s most complex and abstract concepts, from gravitational waves (Gravity’s Kiss) to Darwinian evolution (The Evolution of Beauty) to antibiotic resistance (Big Chicken). Each of these remarkable tomes from 2017 does just that, shining a light on the hidden connections and invisible forces that shape the world around us. In doing so, they make our experience of that world that much richer.

Henry Sapiecha

It was a restructuring year in certain ways, as emerging technologies for the enterprise gradually moved forward but didn’t result in as many new targets to track as last year. Yet it’s also abundantly clear the largest digital shifts by far are still ahead of us. Here’s how the year ahead is shaping up.

Enterprise IT in 2017 continues to be highly in tune with consumer technology, but for a change this year we can see a concerted push to shape business-ready versions of emerging tech in sizzling new categories. This is especially the case in arenas like blockchain, digital twins, marketing integration solutions, and digital transformation target platforms.

Not too many items fell off this year’s enterprise tech to watch list either, as organizations continue to struggle to adopt the growing raft of relevant new technologies that have steadily arrived on the scene recently.

Consequently, the portfolio of emerging tech that must be managed continues to grow quickly, even as IT spending — and overall tech absorption capacity — is increasing only in the low single digits. This is an untenable proposition that’s putting more and more IT organizations under stress. Most significantly, this is creating service backlogs that are pushing “edge IT” implementation and acquisition — or systems that are not considered mission critical enterprise-wide — out into lines of business for realization as they see fit, as well as fueling so-called shadow IT projects at the departmental level.

As a result, I currently find that IT organizations are seeking novel ways to learn about and adopt emerging tech to ride the exponential curve of digital change. That’s a whole separate conversation, but one that is becoming urgent as the CIO comes under significant and steady pressure to deliver far more quickly in 2017.

For this year’s list of enterprise tech to watch, I’ve attempted to synthesize data from all available sources, particularly industry trend data. In general, I prefer to include technologies on the list that are expected to grow in the double digits every year for a half decade or more. However, I’ll sometimes include important new technology categories if they clearly warrant it based on early importance, even if forecasts aren’t readily available.
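The double-digit growth screen above is simple compounding arithmetic. As a rough, hypothetical sketch (the figures below are placeholders, not the article’s data), here is how a compound annual growth rate (CAGR) turns a current market size into a multi-year forecast, and how a start/end forecast implies a CAGR:

```python
# Illustrative sketch of CAGR arithmetic; the numbers are made up.

def project_market(current_size: float, cagr: float, years: int) -> float:
    """Project a market size forward by compounding annual growth."""
    return current_size * (1 + cagr) ** years

def implied_cagr(start: float, end: float, years: int) -> float:
    """Back out the annual growth rate implied by a start/end forecast."""
    return (end / start) ** (1 / years) - 1

# A hypothetical $1.0B market growing 43% a year for 4 years:
print(round(project_market(1.0, 0.43, 4), 2))      # ≈ 4.18 ($B)

# The CAGR implied by growing from $0.5B to $35B over 4 years:
print(round(implied_cagr(0.5, 35.0, 4) * 100, 1))  # ≈ 189.3 (% per year)
```

As a feel for the screen itself: double-digit (10 percent or more) growth sustained for five years implies a market at least 1.1⁵ ≈ 1.61 times its current size.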

As a result, we’ve seen the steady shift from SMAC (social, mobile, analytics, cloud) that dominated this list at its inception to one that is more focused on artificial intelligence, Internet of Things, distributed ledgers, immersive digital experiences (AR/VR), edge computing, low code tools, and much more.

That’s not to say that essentially mainstream technology bases like public cloud, cybersecurity, or big data are stale and therefore about to come off the list. In fact, they are shifting and evolving more now than ever before, and should remain at the top of the technologies that most enterprises are watching very closely today.

Based on my analysis then, here is the short list of enterprise technologies that organizations should be tracking for building skills, assessing their strategic and tactical impact, experimenting with, and subsequently preparing for wider-scale adoption, often as part of a more systematic program of digital transformation.

As in previous years, I’ve also included a horizons list in this year’s tech to watch, which shows technologies that are almost certainly going to be significant in coming years but should for now be relegated primarily to tracking and monitoring, unless one is impactful for your core business in the near term.

The enterprise technologies to watch in 2017

In roughly clockwise order, here’s the breakdown, with a brief note on why each enterprise technology to watch this year is significant, with data on its outlook if available:

    • Machine learning.

    • Separating the topic of machine learning from artificial intelligence is still a tricky task. However, I categorize machine learning as the ability for systems to learn from data in an unsupervised manner and with minimal guidance, while artificial intelligence represents systems that can improve themselves through more abstract reasoning not necessarily dependent on data. It’s tougher still to tease apart forecasts for the two, as they are often lumped together. However, one leading report this year, citing the expectation that machine learning will become common in supporting workplace activities (a sentiment I very much concur with), expects 43 percent annual growth for the category, which will reach $3.7 billion in revenue by 2021.
    • Contextual computing.
    • The increasing desire to augment productivity and collaboration by supplying information on-demand, usually just as it’s needed and before it’s explicitly asked for, has already become big business. Established industry players such as Apple, Intel, and Nokia are working on and/or offering context-aware APIs already, while a raft of startups is competing to make the early market. Contextual computing is now expected to grow 30 percent annually and reach a market size of a whopping $125 billion by 2023, largely due to widespread use in consumer mobile devices and smart agents.
    • Virtual reality.
    • Still a niche technology despite support from major industry players such as Samsung, with its inexpensive yet high-quality Gear VR headset, and Apple, with its new ARKit, virtual reality is poised to account for a growing share of end-user experience as the technology becomes more refined and, especially, less bulky and intrusive. While just a half-billion-dollar market today, virtual reality is expected to grow at the blistering pace of 133 percent a year on average, becoming a $35 billion industry by 2021.
    • 3D and 4D printing.
    • While evolving in fits and starts, 3D printing has already become important to a wide range of industries, from aerospace and energy to electronics and even culinary uses. 3D printing is remaking the logistics industry as well by moving manufacturing directly to the point of use and making it on-demand. 3D printing will become a significant industry quite soon, growing by 26 percent annually through 2023 and becoming a $33 billion market. 4D printing, which produces objects whose shapes can change over time, is a much smaller industry but as a result is growing quickly, at 39 percent a year through 2022, by which point it will likely be a $100 million-plus market.
    • 5G wireless.
    • Few mobile technologies are as anticipated as 5G, the next generation of wireless telecommunications standards and infrastructure, which will bring revolutionary bandwidth increases (potentially up to 1 Gbps in some cases) and enable new high-value business scenarios including immersive virtual reality telepresence, 4K/8K video streaming, and other very high bandwidth uses. While not expected to commercialize until at least 2019, 5G is widely expected to impact numerous industries and markets, despite real challenges in beaming millimeter waves significant distances, fueling futuristic experimental 5G projects like Google Skybender. 5G spending is expected to grow by 70 percent a year and reach at least $28 billion a year in revenue by 2025.
    • Real-time stream processing and analytics.
    • Best exemplified by software and cloud services like Apache Spark and Amazon Kinesis, the Internet of Things revolution and rich media in general are fueling the need to process and analyze massive amounts of data without delay — both event metadata and the data itself — as it flows in from services and devices on the edge of the network. While not quite the white-hot item it was last year, stream processing remains a critical technology for data-driven companies. Stream processing and analytics is expected to grow 33 percent a year through 2025.
    • Wearable IT.
    • The market for enterprise wearables remains quite small and is still limited to niche applications like corporate wellness, hands-free scenarios, situational customer/workforce experiences (typically location-based or rapid notifications), and just-in-time decision making. Yet this belies the anticipated growth of the category, which is expected to expand by over 75 percent a year and become an industry $12 billion in size by 2021.
    • Mobile payments.
    • With Apple Pay’s steady expansion, the rise of Samsung Pay, and the use of mobile devices for payments across the developed and developing world, the smart device is rapidly becoming the wallet of the future. Enterprises must become ready to access these revenue streams and watch the evolution of the industry closely, as revenue flows move to digital channels not controlled as much by traditional financial institutions. Mobile payments are currently expected to grow by 20 percent a year globally and become a $1.7 trillion industry by 2022.
    • Containers.
    • Best known through their popularization by Docker, containers remain a leading on-ramp and direct pathway to cloud computing as well as a more modern and effective model for the design, management, governance, and optimization of IT applications. Considered a contemporary method to architect and operate cloud software today, containers are on the short list of models most organizations are seriously considering for go-forward applications, whether bought or built. The growth picture tells the story here, with a 40 percent annual growth rate and a $2.6 billion container-as-a-service market by 2020.
    • Mobile business apps.
    • Stubbornly one of the most challenging aspects of enterprise IT, good mobile apps for both internal and external customers remain elusive for the average organization, yet are critical to the success of its digital experiences. Why are mobile apps so hard? A confluence of reasons: the two main mobile platforms (iOS and Android) are large and complex, and they are still fairly unfamiliar to most of IT, while mobile application management issues, the proliferation of devices and form factors, and security concerns round out the barriers. Mobile apps are expected to grow by 14 percent a year and reach $100 billion in yearly revenue by 2022, yet the enterprise component of that is likely to remain small. Leading organizations can seize first-mover advantage in 2017 and beyond by providing the mobile experiences their stakeholders want.
    • On-demand, as-a-service, and software-defined everything.
    • In short, everything in the IT business — from security and storage to networking, computing, and applications — is becoming software-defined and packaged as an on-demand service. While this is nothing new, it can be alarming to find that most modern IT offerings want to meter everything, with no option to purchase outright and pay maintenance. This is distressing enough that I’ve had more than one CIO complain to me that it feels like buying all their IT over again every year, an expectation that vendors will have to manage. There are so many projections in this space that I’ll select just one overall forecast of IaaS, PaaS, and SaaS as a whole, which is expected to become a $390 billion industry in just 2.5 years, by 2020.
    • Workplace hubs.
    • There is growing interest in streamlining the workplace and making it more efficient and centralized. Many of the latest tools — from Slack and Microsoft Teams to IBM Connections (with AppSpokes) and Cisco Spark — are creating powerful new workplace hubs that allow systems of record and systems of engagement to come together more effectively into a consistent and contextual digital workplace, complete with integrated apps. How big is this trend so far? It’s challenging to estimate, as there is no dedicated forecasting in this category yet. However, I have included it as a clear industry trend based on the inclusion of these capabilities in most of the latest enterprise collaboration offerings.
    • Edge/fog computing.
    • As the Internet of Things and other computing form factors that move data gathering out to the far-flung sides of the network grow in scale and data volume, there is a growing need to put more intelligent processing at the edge of the network, rather than transporting all that data across the cloud. Rather than a counter-trend to the cloud, edge computing (sometimes called fog computing) complements it by putting computing power in cloud-friendly technology packages where it makes the most sense for cost and performance reasons. Edge computing will grow by 35 percent annually through 2023, when it will become a $34 billion industry.
    • Adaptive cybersecurity.
    • Perhaps the real top priority of many CIOs, cybersecurity has assumed a preeminent place in IT strategy and investment, despite being almost exclusively a cost center that keeps the business running and customers safe. Adaptive cybersecurity, which uses a combination of artificial intelligence and other methods to dynamically shift tactics and detect/remove threats as quickly as possible, is among the very forefront of security methods. Adaptive cybersecurity will grow by 15 percent a year and will become a $7 billion industry by 2021.
    • Team collaboration.
    • Smaller-scale collaboration has become very popular in the last few years, augmenting the big shift toward enterprise-scale collaboration five years or so ago. The rise of nimbler, more team-based tools like Slack has been well documented, as have the dozens of me-too competitors. At the same time, many applications have adopted chat tools within them, and consumer services like WhatsApp are widely used for business. Enterprises have been forced to adopt multi-layered collaboration strategies to cope. Nevertheless, it’s clear that the resurgence of team collaboration is here to stay. Overall, the global cloud collaboration market (where the vast majority of team collaboration is offered today) is growing at 13 percent a year and will be a $38 billion industry by 2020.
    • Marketing integration.
    • One of the worst-kept secrets of the marketing technology industry is that almost none of it fits together without manual integration, despite a rapidly expanding multichannel world where this is far and away the largest problem currently reported by brands. As I explored recently in the struggles of companies to gain a single view of the customer, the explosion of marketing solutions is making the problem worse, not better. Yet there is no actual category of marketing integration tools, though a good number of solutions address at least some of the issue. There will be volumes written about the mismatch between marketing technology availability and actual customer needs today, but we can use marketing automation as a related “stand-in” industry that does some “martech” integration; it will grow at 11 percent a year and be an $8 billion industry by 2025.
    • Digital twins.
    • One of the new entrants to the main list this year, digital twins are software-based replicas of business assets, processes, and systems — especially ones based on IoT — that can be used for purposes such as modeling, forecasting, and business transformation, and they have been trumpeted prominently by market leaders like GE as a key to successful digital transition. Organizations can increase predictability, lower risk, and test innovation much more quickly using their digital twins. As a very new enterprise concept, there is no publicly available market forecast yet for digital twins, but Gartner has prominently included it in its top 10 strategic tech trends for 2017.
    • Multichannel digital experience.
    • As I explored in the marketing integration pieces, creating a cohesive experience across multiple digital channels (mobile, social, devices, apps, etc.) remains a top challenge for organizations, and one that goes well beyond marketing. Often known as the “omnichannel” problem, the issue is that new digital channels emerge and become important far faster than the response windows of digital experience teams. Digital experience capabilities help solve this channel fragmentation issue. The category is also known as customer experience management (CEM), though I don’t use that term because “customer” is a misnomer: the digital experience must be managed alike for customers, prospects, suppliers, partners, and the workforce. The digital experience industry will grow by 21 percent a year and become a $13 billion industry by 2021.
    • Microservices.
    • A more refined and fine-grained way to architect modern IT, microservices have gained the upper hand as the leading way to open up data and systems for use and reuse by other parts of the business, and for open APIs to third-party suppliers and developers. As a key part of the strategic digital ecosystem story, microservices will grow at a reported 16 percent a year and be a $10 billion industry within a few years.
    • Digital transformation target platforms.
    • These are capabilities built on top of enterprise cloud stacks from the likes of Amazon, Microsoft, and Google Cloud that package patterns, templates, industry accelerators, and emerging tech capabilities like blockchain and IoT into business solution frameworks, providing a proven path through which to implement an enterprise-scale digital transformation. One notable recent example of this product category is SAP Leonardo. I’ll be posting my findings on other top solutions soon. There is no market estimate yet for this brand-new digital category.
    • Digital learning.
    • I’m retiring MOOCs and global solution networks as explicit entrants from last year’s list; both are still important categories, but they are now subsumed into this larger one. Digital learning — essential to staff the modern digital enterprise with talent — is shifting to more sophisticated models, from microlearning to adaptive learning systems, even as community-based models remain as important and fast-growing as ever. The overall smart learning market is a juggernaut, as education is generally, and will grow at 25 percent yearly to be a $584 billion industry by 2021.
    • Artificial intelligence.
    • Cognitive systems have become powerful enough to begin cracking some of our most challenging business issues and sit at the top of the venture capital, acquisition, and enterprise IT priority lists of many organizations. The industry is expected to grow at a 52 percent annual pace and be a $36 billion market by 2025.
    • Customer journey management.
    • Using data to dynamically provide the best quality, adaptive, and personalized customer experience across an organization’s various silos (marketing, sales, operations, customer care, etc.) is the next and more strategic progression of multichannel digital experience. While still allocated to the customer experience management function, it’s a separate concern that can be, and often is, dealt with separately. Again, this is an emerging product category, but in the sense that it realizes an effective data-driven customer experience, it will be a 14 percent year-over-year growth category that turns into a $12 billion enterprise industry by 2023.
    • Internet of Things (IoT) and Internet of Everything (IoE).
    • As just about every manufactured object in the world — and quite a few non-manufactured objects, which will be instrumented with sensors — becomes pervasively connected, the number of devices on our networks is set to grow by many orders of magnitude. This creates large business opportunities for organizations ready to capitalize on the global streams of data, analysis, and two-way control that IoT represents. IoE is even more strategic and has become a catch-all phrase for adding both connectivity and intelligence to practically every device and connected scenario in order to give them useful smart functions. IoT numbers almost always impress due to their scale: the IoT market will be $267 billion in size by 2020, with at least a 20 percent compound annual growth rate (CAGR) at every level of the IoT stack. For its part, IoE is estimated to become a vast $7 trillion industry through a 16 percent growth rate, due to so much of the connected computing universe being attributable to it.
    • Blockchain and distributed ledgers.
    • A complex yet historic amalgam of network, cryptography, and database technologies, blockchain and decentralized record systems like it are making big waves in industries like healthcare, insurance, and especially finance, given blockchain’s roots in Bitcoin. While many organizations are grappling with the implications of decentralized, open record keeping for their business models, the writing is on the wall: most legacy transaction logging systems that are closed and proprietary are likely nearing the end of their useful lifetime. Blockchain and related models for digital ledgers are expected to grow at a 58 percent annual rate and create a $5.4 billion market by 2023.
    • Social business.
    • Long a combined technology and mindset approach to create more connected and effective communities and organizations, social business remains the most strategic set of ideas and tools for creating modern organizations through new communications and collaboration methods. Along the way, the approach has logged hard data on its benefits. While the term itself is aging out, the practice remains at an all-time high in organizations and is growing steadily at a 26 percent annual rate through platforms like enterprise social networks and social business analytics.
    • Open APIs.
    • Part and parcel of the microservices discussion, with which it now overlaps substantially, open APIs have come of age as a way to open up IT for reuse and remixing within organizations and especially out to developer communities and business partners. I’ve been sanguine about this approach for a decade, and it has finally matured into a major industry. While APIs represent many types of technologies and approaches, one key barometer is API management platforms, which will be a $3.4 billion market by 2023 via a 33 percent annual growth rate.
    • Collaborative economy.
    • Also known as the sharing economy, this approach of using the Web as a platform to exchange goods and services more directly and democratically has had its ups and downs over the years. Although the implications of the collaborative economy, a term originally coined by Jeremiah Owyang, go to the very heart of business models and have disrupted entire industries from hospitality to transportation, it’s proven a harder model to repeatably deliver on than some originally thought, even though my opinion is that most industries have yet to feel the brunt of it. That said, respected organizations like the Brookings Institution have pegged the sharing economy at a whopping $335 billion in yearly revenue by 2025. Consequently, it very much belongs as a core, though existentially challenging, technology on this list again this year.

The upshot is that there are a great many technologies on the enterprise tech to watch list, an all-time high in fact, never mind the horizons list, which is poised to be even more disruptive in many cases. As I pointed out a few years ago, technology cycles are coming more and more quickly, and fixed, traditional strategic planning cannot adequately take them into account.

For most organizations, this will mean all-new ways of thinking about and managing the technology adoption life cycle. Fortunately, we have fresh choices and new ways of activating forces for change at scale that do seem able to better accommodate the size and scope of the challenge at hand. In the meantime, we live in very exciting times indeed, even though it’s still just the dawn of digital technology in the enterprise.

Sourced from Enterprise Web 2.0

Additional reading


[Image: Edward Joseph Lofgren, scientist]

A pioneering physicist at the Lawrence Berkeley National Laboratory who helped build a key tool for studying the universe and played a role in the project that created the first atomic bomb has died, a lab official said Thursday.

Edward Joseph Lofgren led the development, construction and operation of the Bevatron, an early particle accelerator at the lab. A giant machine that smashes atoms, it was used to find the antiproton, a discovery which led to a Nobel Prize. This research helped scientists study how today’s universe was created and grew.

Lofgren also was involved in the Manhattan Project, the federal government’s successful effort to build an atomic bomb.

Lofgren died in Oakland, California, on Sept. 6, lab spokesman Glenn Roberts Jr. said. He was 102.

Before his retirement in 1979, he also served as associate laboratory director, and he was the first director of the newly formed accelerator division.

Born Jan. 18, 1914, and the youngest of seven in a family of Swedish immigrants, he moved to Los Angeles at age 13 and finished high school. He later enrolled at UC Berkeley, arriving by bus with two suitcases and $200. He had read about and become increasingly interested in its Radiation Laboratory and the cyclotron developments there.

He earned an undergraduate degree in 1938 and then enrolled as a graduate student. In 1940 he joined the Radiation Laboratory’s staff as a research assistant. One of his duties was assisting in the development of techniques for medical isotope production.

Lofgren left his graduate studies to become a full-time employee of the Radiation Lab and led development of the ion sources for the Calutron. He spent much of the early war years in Oak Ridge, Tennessee, assisting in the development of the Calutron farm there to enrich uranium-235 for the Manhattan Project, which built the first atomic bomb, according to friend and former colleague Jose Alonso.

Lofgren moved in fall 1944 to Los Alamos, New Mexico, where he joined a group working on detonators for the atomic bomb, the Lawrence Berkeley National Laboratory website says. He eventually became the group’s leader, the website says. Lofgren was at the Trinity atomic bomb test in New Mexico, manning a radiation-monitoring station six miles from ground zero, according to the website.

He earned his doctorate from UC Berkeley in June 1946.

Alonso, who worked for Lofgren for five years, but knew him for more than 40 years, said that even a week before his death his innate interest in the world hadn’t faltered. Alonso recalled how Lofgren was explaining how San Francisco fog was generated and why it was there.

“He was always wanting to teach,” Alonso said.

His daughter, Claire Lofgren, agreed. “As kids, he had a big love of the natural world and throughout his adult life he was a supporter of (the environment) and he would take us to all these wild places,” she said.

He would explain the phases of the moon to his children, among other things, she recalled. She said she once asked him what led him to become a physicist. He explained to her that as a child of 5 or 6 he was lying under a tree watching the branches blow.

“He was watching the tree move and wondering if the tree was making the wind or the wind was making the tree move,” she said. “Those types of questions never left him.”

He was preceded in death by Lenore Lofgren, his first wife and the mother of his three children, and Selma Lofgren, his second wife. Lofgren is survived by his three daughters: Helen Lofgren, Laurel Phillipson and Claire Lofgren; four grandchildren; and two great-grandchildren.



An analysis of the world’s most valuable scientific documents and manuscripts, illustrating both how far science has come in a relatively short time and how little we value our legacy in monetary terms.

This is the second of a six-part series covering the most valuable scientific documents and manuscripts, from #50 to #41. The introduction to the marketplace is the first part of the series, and #40-31, #30-21, #20-11 and #10-1 will follow over consecutive days. Links to other parts of the series will be added here as they are published.

50 – Autograph manuscript of Einstein’s first scientific essay

[Image: Einstein’s first scientific paper, written at 16 years of age]

Price: US$676,369 (£344,000)

Estimate: £300,000 – £500,000

Created: circa 1894 – 1895

Significance: Einstein’s first scientific paper, written at 16 years of age (he is pictured above at 14), contains the seeds of the theory of relativity. It pursues an inquiry relating to the ether, the elastic substance which, according to the science of the day, filled all of space. It was Einstein’s continued interest in questions on the boundary between mechanics and electromagnetism that provided the departure point for his 1905 special theory of relativity, which was to cause the final abandonment of the ether concept.

Some perspective on the price: Items to sell for a similar amount at auction include Marilyn Monroe’s baby grand piano ($662,500), Dorothy’s ruby slippers from the Wizard of Oz ($666,000), an Olympic Games Torch from the 1952 Helsinki games ($658,350), the jersey worn by American captain Mike Eruzione in the “The Miracle on Ice” Gold Medal Ice Hockey game at the 1980 Winter Olympics ($657,250), a Babe Ruth New York Yankees jersey ($657,250), a pocket watch given to Babe Ruth by the New York Yankees ($650,108), plus original art from an Incredible Hulk comic book ($657,250) and original art from a Spider-Man comic book ($657,250).

49 – Journey of Discovery to Port Phillip, New South Wales
by William Hovell and Hamilton Hume

[Image: Journey of Discovery to Port Phillip, New South Wales]

Price: $688,286 (AUD932,000)

Estimate: AUD750,000 – AUD850,000

Created: The overland exploration detailed in the book was undertaken in 1824 and 1825; the book was published in 1837, but this copy was one of a few printer’s proofs created in 1831.

Significance: The only unpublished proof copy in private hands of a landmark book about the exploration of Australia. Look closely at the map above (from a Sotheby’s auctioned copy of the second edition) and you’ll see that Port Phillip is the area upon which Australia’s second largest city, Melbourne, now sits. The auction copy was given to French navigator Louis de Freycinet (1779 – 1841), whose annotations to the text can be seen in the auction copy alongside those of its editor, convicted murderer and subsequently member of Parliament, Dr William Bland. Freycinet was the first person to publish a map showing the full coastline of Australia in 1811. The full text of the most expensive Australian book ever to sell at auction has been digitized and is available for free online via Project Gutenberg.


The world’s most expensive movie poster sold for $690,000

Some perspective on price: In terms of items that have sold for a similar amount at auction, the world’s most expensive movie poster sold for $690,000 at a Reel Galleries auction in November, 2005. The poster is one of just four surviving from the epic 153-minute 1927 silent movie classic Metropolis, the story of a dystopian future set in the year 2000 and one of the first feature films to pioneer the science fiction genre. German artist Heinz Schulz-Neudamm (1899-1969) created the poster, the novel and screenplay were written by Thea Von Harbou (1888-1954), and the film was directed by Thea’s husband, Fritz Lang (1890-1976). You can watch the trailer for the remastered original movie here.

48 – The Principal Navigations, Voiages, Traffiques and Discoveries of the English Nation

Price: $743,687 (£458,500)

Estimate: £180,000 — £240,000

Created: The copy that achieved this price was of three volumes bound as two, dated 1598, 1599 and 1600 respectively. This copy is the first issue of the second edition with Volume One dated 1598. The first edition was published in 1589, with this copy of the second edition greatly expanded and including the very rare Wright-Molyneux world map.

Significance: Though Richard Hakluyt (1553 – 1616) never traveled further from England than France to assemble this work, he met or corresponded with many of the great explorers, navigators and cartographers, including Sir Francis Drake, Sir Walter Raleigh, Sir Humphrey Gilbert, Sir Martin Frobisher, Abraham Ortelius and Gerardus Mercator. In addition to long and significant descriptions of the Americas, this work also contains accounts of Russia, Scandinavia, the Mediterranean, Turkey, the Middle East, Persia, India, South-East Asia and Africa. This copy includes an account of “the famous victorie atchieved at the citie of Cadiz” (by Sir Francis Drake), which was ordered to be suppressed in 1599, and therefore is sometimes missing in copies of this work.

The Wright-Molyneux map is based on Mercator’s projection, which Mercator expected would be a valuable tool to navigators, and this map was one of the first to use it. Unfortunately, Mercator gave no explanation of the underlying mathematics used to create the map, and it was left to Edward Wright to explain it in Certain Errors in Navigation Detected and Corrected (1599), hence the projection sometimes being called the Wright Projection by English mapmakers. The map is linked to Emery Molyneux, whose globe of 1592 provided most of the geographical information it contains. Hakluyt’s use of this map in his publication was to show “so much of the world as hath beene hetherto discouered, and is comme to our knowledge.” Wright later translated, from Latin into English, John Napier‘s pioneering 1614 work that introduced the idea of logarithms.
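The mathematics Wright supplied can be written compactly in modern notation: on a Mercator chart, the vertical coordinate for latitude φ is y = R ln tan(π/4 + φ/2), the stretching relation whose values Wright tabulated so that rhumb lines (courses of constant compass bearing) plot as straight lines. A minimal Python sketch of the relation (the function name is mine, purely illustrative):

```python
import math

def mercator_y(lat_deg: float, radius: float = 1.0) -> float:
    """Vertical Mercator coordinate for a given latitude in degrees.

    Equal steps of latitude are stretched increasingly toward the
    poles, which is what keeps constant-bearing courses straight.
    """
    phi = math.radians(lat_deg)
    return radius * math.log(math.tan(math.pi / 4 + phi / 2))

# Equal 30-degree latitude steps occupy ever more chart distance:
for lat in (0, 30, 60):
    print(f"{lat:>2} deg -> y = {mercator_y(lat):.3f}")
```

At 60° the coordinate is already more than double its 30° value, which is why high-latitude landmasses look so vast on Mercator charts.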


Some perspective on price: Two fascinating scientific instruments have also sold in this price range: a Gilt and Brass Astronomical Table Clock (above left) made in Augsburg (Germany) circa 1560 – 70, which sold for $725,000 at a Christie’s (New York) auction in January, 2015, and a brass Astrolabe made by Muhammad ibn Ahmad al-Battûtî in Morocco, circa 1757, which sold for £421,250 ($729,021) at a Sotheby’s (London) auction in October, 2008.

47 – Les Voyages du Sieur de Champlain Xaintongeois
by Samuel de Champlain


Price: $758,000

Estimate: $250,000 – $350,000

Created: 1613

Significance: The renowned Siebert copy of this first edition landmark of French Americana and New World exploration, a pioneering work in ethnography and the first accurate mapping of the New England coast. One of the finest copies of this work extant, it previously sold in May, 1999 at a Sotheby’s New York auction for $360,000.

From the auction description: One of the most important works of the 17th century, remarkable in its content and execution, being the work of one man – a gifted naturalist, an artist (trained as a portrait painter in France), a skilled cartographer and sympathetic ethnographer. Samuel de Champlain’s account of his voyages of 1604, 1610, 1611 and 1613 is a key exploration narrative, one considerably enhanced by the author’s lively illustrations, in which he records his mapping of a vast area with unprecedented detail and accuracy while also depicting the flora and fauna of the New World. The vignettes within the rare Carte Geographique de la Nouvelle Franse are an artist’s rendition of new species, giving a hint of the varied and vast natural resources to be found in the New World. Of this monumental cartographic endeavor, Armstrong wrote: “not the work of a bureaucrat, but of a skillful psychologist, promoter and politician…Champlain’s map of 1612 is the most important historical cartography of Canada.”

You can read the complete book (albeit in French) at Bibliothèque Nationale de France or see the main illustrations in detail at Canada’s McGill Bibliothèque.


Some perspective on price: Interestingly, several other items of historical significance to the United States have sold for a similar amount at auction. These include a 1777 manuscript map of New York Island from the American Revolutionary War (above) that fetched $782,500, the original autograph manuscript by Julia Ward Howe of “The Battle Hymn of the Republic” that also fetched $782,500, W.I. Stone’s 1823 “50th Anniversary” engraving of the ‘Declaration of Independence’ that also sold for $782,500, and a draft manuscript of the United Kingdom’s Stamp Act of 1765 (an effort to heavily tax the colonies and a catalyst for the American Revolution), that sold for $755,000.

46 – The Decades of the Newe Worlde
by Pietro Martire d’Anghiera


Price: $768,000

Estimate: $80,000 – $120,000

Created: Published 1555 but translated from works in other languages produced over the previous 75 years.

Significance: The full title of this book is The Decades of the newe worlde or west India, Conteyning the nauigations and conquestes of the Spanyardes, with the particular description of the moste ryche and large landes and Ilandes lately founde in the west Ocean perteynyng to the inheritance of the kinges of Spayne.

It is the first series of narratives on epic voyages based on the first three Decades of Peter Martyr (Pietro Martire d’Anghiera – read the text in English here), which were originally written in Latin between 1511 and 1530. The book was edited and translated into English by Richard Eden and published in London by William Powell in 1555. The auctioned book sold for almost 10 times its estimate, mainly due to its significance as the first edition of the first collection of voyages printed in English, and the first work to contain narratives of English voyages.

Besides the three Decades of Peter Martyr, it contains a translation of that author’s “De nuper sub D. Carolo repertis Insulis” (describing the voyages of Francisco Hernández de Córdoba, Juan de Grijalva, and Hernán Cortés), the Bull of Pope Alexander (by which he decreed that the world was to be divided between Spain and Portugal), as well as translations of the most important parts of the works pertaining to the maritime discovery of the New World by Oviedo, Maximilian of Transylvania, Vespuccius, Gomara and others.

This book is quite a compendium of important work, as it also contains the first printed English treatise on the compass, the first description of “What degrees are,” and “A demonstration of the roundness of the Earth.”

In the book’s preface, the colonization of North America by the English is advocated for the first time and according to The art of navigation in England in Elizabethan and early Stuart times, “for over a quarter of a century it proved to be the English source-book of geographical and navigational knowledge” and “as such it was to be of the utmost value to men like Hawkins and Drake.”

Emphasizing this last point is the book’s provenance – this book was Roger North‘s copy. In 1617, North had sailed with Sir Walter Raleigh in his second expedition to Guiana in South America in search of the mythical “city of gold” known as El Dorado, and in 1620, North was a prime mover behind attempts to establish an English colony on the River Amazon delta. The book bears his signature on the title as well as his motto, “Durum Pati,” believed to be an abbreviation of Horace’s “Durum, sed levius fit patientia…” (‘Tis hard! But that which we are not permitted to correct is rendered lighter by patience). The book is available in full on the Internet Archive.

Some perspective on price: The baseball hit by Barry Bonds for career home run #756 (breaking the all-time Major League Baseball home run record) sold for $752,467 at an SCP auction in 2007.

45 – The Atlantic Neptune published for the use of the Royal Navy of Great Britain
by Joseph Des Barres


Price: $779,200

Estimate: $400,000 – $600,000

Created: 1774-1779

Significance: Swiss cartographer Joseph Frederick Wallet Des Barres (1722-1824) was a member of the famous Huguenot family who studied mathematics under Daniel Bernoulli at the University of Basel, then military surveying at Great Britain’s Royal Military Academy, leading to a commission in 1756 into the Royal Americans and a role as a cartographer in the Seven Years’ War. Using documents captured at Louisbourg, Des Barres compiled a large-scale chart of the St. Lawrence River and Gulf, which enabled the British Navy to navigate its warships to Quebec and take control of the French capital. The victory demonstrated the benefits of accurate marine surveys, and of Des Barres’ capability in particular, and the Admiralty responded by providing him with the resources to accurately chart the coast of Atlantic Canada and the eastern seaboard from New England to the West Indies. This book resulted some 17 years later: a maritime atlas that set the standard for nautical charting for half a century.

Some perspective on price: Several copies of this work have achieved similar high figures, and it is clear that both mariners and historians considered it to be “the most splendid collection of charts, plans and views ever published.”

44 – Atlas Sive Cosmographicae Meditationes De Fabrica Mundi Et Fabricati Figura
by Gerard Mercator


Price: $783,346 (£422,400)

Estimate: £60,000 — £80,000

Created: 1595

Significance: The first atlas to be so called. The first four parts had been published between 1585 and 1589 (see previous lot). To these were added a fifth and final part, Atlantis pars altera, published in 1595, a year after Mercator’s death, and overseen by his son Rumold. This part includes maps of the world and the continents. The complete atlas was dedicated to Queen Elizabeth and the whole was preceded by the famous engraved general title-page showing Atlas measuring the world with a pair of dividers. Interestingly, Mercator refers to Atlas, King of Mauretania (now Morocco), a mathematician and philosopher who is generally credited with having made the first celestial globe, not the mythical Greek god Atlas, whose punishment was to carry the world and heavens on his shoulders. We humans certainly have a propensity to get our stories mixed up.

43 – Viviparous Quadrupeds of North America
by John James Audubon


Price: $793,000

Estimate: $600,000 – $700,000

Created: 1845 – 1854

Significance: The most expensive of numerous copies of John James Audubon’s second masterpiece. “Viviparous” means birthing young from within the body, so this book is essentially a study of North American mammalian wildlife, and as in Audubon’s best known “Birds of America,” each animal is superbly illustrated in its natural habitat. Equally impressive and sweeping as his ornithological work, the “Viviparous Quadrupeds of North America” is the result of the artist/naturalist’s years of field research, travel, and seemingly endless study, and is the outstanding work on American animals produced in the 19th century. The entire book has been digitized by the University of Michigan’s Special Collections Library and is available in high resolution for free download and use, with attribution.

42 – Globus Mundi


Price: $837,227 (€600,000)

Estimate: €500,000

Created: 1509

Significance: “Globus Mundi” does not list an author, but is considered so valuable because it is the first book on cosmography to officially use the term America as the common name to describe the “New World.” It was published in Strasbourg (Germany) in 1509 by Johann Grüninger.


Some perspective on price: An astrolabe made for the Duke of Parma by Erasmus Habermel sold for $841,070 (£540,500) at a Christie’s (London) auction in October, 1995.

41 – Aves Ad Vivum Depictae A Petro Holysten Celeberrimo Pictore
by Pieter Holsteyn the Younger


Price: $850,000

Estimate: $300,000 – $500,000

Created: circa 1638

Significance: Pieter Holsteyn II (1614 – 1673) worked closely with his father, Pieter Holsteyn the Elder, in producing fine gouaches and watercolor natural history portraits and botanicals, and grew to become one of the Dutch Golden Age watercolor masters. His particular skill was the delicate but detailed depiction of many of the new and exotic species being brought back to Amsterdam from the voyages of the Dutch East India Company. This particular book is extremely rare, as most of the natural history albums produced in the 17th century have long since been broken apart and the images sold piecemeal. The book is renowned for its famous illustration of the now extinct White Dodo.

Some perspective on price: A similar collection is for sale at Arader Galleries in New York at a price of $4.5 million.

Continue reading in the third part of the series, numbers 40-31.



Henry Sapiecha



From the rare scribblings of Alan Turing through to the genius of Newton, Einstein and Madame Curie, we continue to navigate our way through the fascinating list of the 50 most valuable scientific documents of all-time.

This is a preview of what is to come in this series on the 50 most valuable scientific documents.

Upcoming posts will detail each document, so watch for them here.





In an effort to create a more viable material for drug delivery, a team of researchers has accidentally created an entirely new material thought for more than 100 years to be impossible to make. Upsalite is a new form of non-toxic magnesium carbonate with an extremely porous surface area which allows it to absorb more moisture at low humidities than any other known material. “The total area of the pore walls of one gram of material would cover 800 square meters (8611 sq ft) if you would ‘roll them out’”, Maria Strømme, Professor of Nanotechnology at Uppsala University, Sweden, tells Gizmag. That’s roughly equal to the sail area of a megayacht. Aside from using substantially less energy to create drier environments for producing electronics, batteries and pharmaceuticals, Upsalite could also be used to clean up oil spills, toxic waste and residues.


Scientists have long puzzled over this particular form of magnesium carbonate since it doesn’t normally occur in nature and has defied synthesis in laboratories. Until now, its properties have remained a mystery. Strømme confesses that they didn’t actually set out to create it. “We were really into making a porous calcium carbonate for drug delivery purposes and wanted to try to make a similarly porous magnesium carbonate since we knew that magnesium carbonate was non-toxic and already approved for drug delivery,” she tells us. “We tried to use the same process as with the calcium carbonate, totally unaware of the fact that researchers had tried to make disordered magnesium carbonates for many decades using this route without succeeding.”

The breakthrough came when they tweaked the process a little and accidentally left the material in the reaction chamber over a weekend. On their return they found a new gel in place. “We realized that the material we had made was one that had been claimed impossible to make,” Strømme adds. A year spent refining the process gave them Upsalite.

While synthesizing a long-theorized material sounds like cause for celebration in itself, Strømme says the major scientific breakthrough is to be found in its amazing properties. No other known carbonate has a surface area as large as 800 sq m per gram. Though scientists have created many new high surface area materials with nanotechnology, such as carbon nanotubes and zeolites, what makes Upsalite special is the minuteness of its nanopores.

Each nanopore is less than 10 nanometers in diameter which results in one gram of the material having a whopping 26 trillion nanopores. “If a material has many small pores,” explains Strømme, “it gives the material a very large surface area per gram, which gives the material many reaction sites, i.e. sites that can react with the environment, with specific chemicals, or in the case of Upsalite, with moisture.”
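A hedged back-of-envelope illustration (my own sketch, not the researchers’ model) of why shrinking the pores drives up reactive surface: for idealized cylindrical channels, the wall area per unit of pore volume is 4/d, so every halving of the pore diameter doubles the surface available for reactions:

```python
def area_per_pore_volume(d_m: float) -> float:
    """Wall area per unit pore volume for an idealized cylindrical channel.

    Lateral area pi*d*L divided by volume pi*d^2*L/4 gives 4/d,
    independent of the channel length L.
    """
    return 4.0 / d_m

for d_nm in (100, 10, 5):
    ratio = area_per_pore_volume(d_nm * 1e-9)
    print(f"{d_nm:>3} nm pores -> {ratio:.1e} m^2 of wall per m^3 of pore volume")
```

Real pore networks are far messier than straight cylinders, but the 4/d scaling conveys the essential point: at sub-10-nanometer diameters, enormous wall area fits into a single gram of material.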

Upsalite’s moisture absorption properties are striking. It was found to absorb 20 times more moisture than fumed silica, a material used for cat box fillers and as an anti-caking agent for moisture control during the transport of moisture sensitive goods. This means that you’d need 20 times less material to do the moisture control job.

Its unique pore structure also opens up new applications in drug delivery. The pores can host drugs that need protection from the environment before being delivered to the human body. It’s also useful in thermal insulation, drying residues from oil and gas industries, and as a desiccant for humidity control. Potential applications are still being discovered as the material undergoes development for industrial use.

The team at Uppsala University is commercializing Upsalite through their spin-off company Disruptive Materials. An article describing the material and its properties can be found at PLOS ONE.

Source: Disruptive Materials



Senior geneticists and bio-ethicists have agreed with the US spy chief’s claim that genetic engineering could be a serious threat if put to nefarious ends


Gene editing has been made possible by rapid advances in technology.

A senior geneticist and a bioethicist warned on Friday that they fear “rogue scientists” operating outside the bounds of law, and agreed with a US intelligence chief’s assertion this week that gene editing technology could have huge, and potentially dangerous, consequences.

“I’m very, very concerned about this whole notion of there being rogue clinics doing these things,” geneticist Robin Lovell-Badge told reporters at the American Association for the Advancement of Science (AAAS) conference in Washington DC, referring to the unregulated work of gene scientists. “It really scares me, it’s bad for the field.”

Recent advances in genetics allow scientists to edit DNA quickly and accurately, making research into diseases, such as cystic fibrosis and cancer, easier than ever before. But researchers increasingly caution that they have to work with extreme care, for fear that gene editing could be deployed as bioterrorism or, in a more likely scenario, result in an accident that could make humans more susceptible to diseases rather than less.

Earlier this week the US director of national intelligence, James Clapper, testified before the Senate as part of his worldwide threat assessment report that he considers gene editing one of the six potential weapons of mass destruction that are major threats facing the country, alongside the nuclear prospects of Iran, North Korea and China.

Bioethicist Francoise Baylis, who also spoke at AAAS and who took part in the international summit that debated gene editing last year, said the technology behind gene editing could be dangerous on a global or individual level.

“I think bioterrorism is a reality, and a risk factor we need to take into consideration,” she said. “It’s like any dual-use technology that can be used for good or evil.”

The Dalhousie University professor compared the advances in technology, particularly a tool called Crispr-Cas9, to a hammer in the hands of good and bad actors alike. “It can be the murder weapon, it can be the gavel the judge uses,” she said. “So I don’t know that there’s any way to sort of control that.”

Since its discovery, Crispr-Cas9 has revolutionized gene editing by helping scientists target certain genes with an unprecedented degree of speed and accuracy. The bacteria-originated tool has sparked a patent war among a handful of scientists, and a new industry worth billions.

In the US, members of the intelligence community agreed that gene editing represents a largely open field. Clapper’s report to the Senate cited the easy access, rapid development and weak regulation abroad in its argument that the “deliberate or unintentional misuse” of gene editing technology “might lead to far-reaching economic and national security implications”.

Daniel Gerstein, a former under-secretary at the Department of Homeland Security, said: “It’s interesting that we have something that is clearly a technology that was designed for legitimate biotechnology research which has been associated in this way with weapons of mass destruction.”

But the prospects are simultaneously magnificent and alarming, said Columbia University bioethicist Robert Klitzman, who was happy to see gene editing on the list.

“I think that this is a very powerful technology,” Klitzman said. “I think as a result that there are things that need to be done that have not yet been talked about.”

Research and technology is growing so fast that it is easy to imagine Crispr used for nefarious ends – or as the enabler of a catastrophic accident, said Klitzman.

“The infectious agent responsible for bubonic plague, if altered through Crispr,” he said, “could potentially be used as a WMD. Currently, we have effective treatment against it. But if it were altered, it could potentially become resistant to these treatments and thus be deadly.”

Setting standards on who can buy the technology and using discretion when publishing scientific research could be key, he said. “Just like guns, you need some kind of security check.”

But regulating gene editing would be like trying to govern how people use fire, said Michael Wiles, a senior director at the Jackson Lab in Maine, a leader in growing genetically modified mice for research.

“Every technology has two edges,” Wiles said. “It’s a disturbing but real concept with humans … you can’t control it.”

While intentional abuse of gene editing is ringing alarm bells, some at AAAS were more wary of accidental adverse consequences from reckless gene editing. Lovell-Badge said he particularly fears the kind of work that might go on in labs or fertility clinics where work on human embryos is performed carelessly and without oversight. Such labs, he said, have “popped up in many countries, including the US”, with “no real basis in science or fact, and may be dangerous in some cases”.

Some of these labs might alter particular genes to create so-called “designer babies”, with tailored features that range from height and eye color to disease immunity. But turning a given gene on or off could also affect the genes around it. For example, giving a baby immunity to one disease could mean it’s now vulnerable to other diseases or infections.

Baylis maintained that genetic enhancements of humans are inevitable, even if she could not say what they will be. But she said that unregulated modifications could exacerbate inequality and create “a new eugenics, a different kind of eugenics”.

Other scientists disagreed – on both sides of the debate. Sarah Chan, a University of Edinburgh bioethicist, said fears of inequality are “definitely overblown”, and that “designer babies” are not inevitable. She added that technology that could make diseases more infectious and dangerous has existed for decades, as have the questions around it.

“Some of the fears and concerns surrounding genome editing technology are, if not overblown, perhaps misdirected.”

Taking the contrary opinion, geneticist Robert Winston said: “Regulation cannot prevent this from happening either in the UK eventually or much more likely elsewhere.

“With the power of the market and the open information published in journals,” Winston said, “I am sure that humans will want to try to ‘enhance’ their children and will be prepared to pay large sums to do so.”





From 2003 to 2013, the number of scientists and engineers residing in the United States rose from 21.6 million to 29 million. This 10-year increase included significant growth in the number of immigrant scientists and engineers, from 3.4 million to 5.2 million.

Immigrants went from making up 16 percent of the science and engineering workforce to 18 percent, according to a report from the National Science Foundation’s National Center for Science and Engineering Statistics (NCSES). In 2013, the latest year for which numbers are available, 63 percent of U.S. immigrant scientists and engineers were naturalized citizens, while 22 percent were permanent residents and 15 percent were temporary visa holders.
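The 16 and 18 percent shares follow directly from the headline counts above; a quick arithmetic check (figures as reported, variable names my own):

```python
totals = {2003: 21.6e6, 2013: 29.0e6}       # all US scientists and engineers
immigrants = {2003: 3.4e6, 2013: 5.2e6}     # immigrant subset

for year in (2003, 2013):
    share = immigrants[year] / totals[year]
    print(f"{year}: {share:.1%} immigrant share")
```

The raw ratios are 15.7 and 17.9 percent, which round to the reported 16 and 18 percent.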

Of the immigrant scientists and engineers in the United States in 2013:

  • 57 percent were born in Asia.
  • 20 percent were born in North America (excluding the United States), Central America, the Caribbean, or South America.
  • 16 percent were born in Europe.
  • 6 percent were born in Africa.
  • And less than 1 percent were born in Oceania.

Among Asian countries, India continued its trend of being the top country of birth for immigrant scientists and engineers, with 950,000 out of Asia’s total 2.96 million. India’s 2013 figure represented an 85 percent increase from 2003.
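The 85 percent growth figure also pins down India’s implied 2003 baseline; a quick check (reported figures, variable names my own):

```python
count_2013 = 950_000   # India-born scientists and engineers in the US, 2013
growth = 0.85          # reported increase since 2003

count_2003 = count_2013 / (1 + growth)
print(f"Implied 2003 baseline: {count_2003:,.0f}")  # ~ 513,514
```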

Also since 2003, the number of scientists and engineers from the Philippines increased 53 percent and the number from China, including Hong Kong and Macau, increased 34 percent.

The NCSES report found that immigrant scientists and engineers were more likely to have earned post-baccalaureate degrees than their U.S.-born counterparts. In 2013, 32 percent of immigrant scientists reported their highest degree was a master’s (compared to 29 percent of U.S.-born counterparts) and 9 percent reported it was a doctorate (compared to 4 percent of U.S.-born counterparts). The most common fields of study for immigrant scientists and engineers in 2013 were engineering; computer and mathematical sciences; and social and related sciences.

Over 80 percent of immigrant scientists and engineers were employed in 2013, the same percentage as their U.S.-born counterparts. Among the immigrants in the science and engineering workforce, the largest share (18 percent) worked in computer and mathematical sciences, while the second-largest share (8 percent) worked in engineering. Three occupations — life scientist, computer and mathematics scientist and social and related scientist — saw substantial immigrant employment growth from 2003 to 2013.




Rumors are rippling through the science world that physicists may have detected gravitational waves, a key element of Einstein’s theory which if confirmed would be one of the biggest discoveries of our time.

There has been no announcement, no peer review or publication of the findings—all typically important steps in the process of releasing reliable and verifiable scientific research.

Instead, a message on Twitter from an Arizona State University cosmologist, Lawrence Krauss, has sparked a firestorm of speculation and excitement.

Krauss does not work with the Advanced Laser Interferometer Gravitational Wave Observatory, or LIGO, which is searching for ripples in the fabric of space and time.

But he tweeted on Monday about the apparent confirmation of a rumor he’d heard some months ago: that LIGO scientists were writing up a paper on gravitational waves they had discovered using US-based detectors.

“My earlier rumor about LIGO has been confirmed by independent sources. Stay tuned! Gravitational waves may have been discovered!! Exciting,” Krauss tweeted.

His message has since been retweeted 1,800 times.

If gravitational waves have been spotted, it would confirm a final missing piece in what Albert Einstein predicted a century ago in his theory of general relativity.

The discovery would open a new window on the universe by showing scientists for the first time that gravitational waves exist, generated in places such as the edge of black holes at the beginning of time, filling in a major gap in our understanding of how the universe was born.

A team of scientists on a project called BICEP2 (Background Imaging of Cosmic Extragalactic Polarization) announced in 2014 that they had discovered these very ripples in space time, but soon admitted that their findings may have been just galactic dust.

A spokeswoman for the LIGO collaboration, Gabriela Gonzalez, was quoted in The Guardian as saying there is no announcement for now.

“The LIGO instruments are still taking data today, and it takes us time to analyze, interpret and review results, so we don’t have any results to share yet,” said Gonzalez, professor of physics and astronomy at Louisiana State University.

“We take pride in reviewing our results carefully before submitting them for publication—and for important results, we plan to ask for our papers to be peer-reviewed before we announce the results—that takes time too!”

Other observers pointed out that any supposed detection may be a simple practice run for the science teams, not a real discovery.

“Caveat earlier mentioned: they have engineering runs with blind signals inserted that mimic discoveries. Am told this isn’t one,” Krauss tweeted.

But science enthusiasts may have to wait a while longer to get all the details.

The LIGO team’s first run of data ends Tuesday, January 12.

“We expect to have news on the run results in the next few months,” Gonzalez was quoted as saying by New Scientist magazine.



2015 was an amazing year for science, but it was also a year for some amazingly overhyped science.

We put our hearts ahead of our data when speculating about advanced extraterrestrial civilisations. We so wanted to believe that a looming ice age would save us from global warming. And we were horrified to learn that the internet’s favourite meat product might cause cancer, along with everything else in the goddamn universe. Here are the most overhyped scientific discoveries of 2015, in all their glory.

The so-called alien megastructure


It wouldn’t be a list of overhyped scientific discoveries without some wild speculation about extraterrestrials, and 2015 did not disappoint. If you weren’t familiar with the term “alien megastructure” before, you certainly are now.

The alien hullabaloo started in early October, when astronomers announced the discovery of KIC 8462852, a weird star in the Kepler database that flickers aperiodically, its brightness sometimes dropping by as much as 20 per cent. It’s certainly not a transiting planet, but it doesn’t look like anything else we’ve seen, either. Still, nobody outside of the astro community would have given a rat’s arse about the cosmic oddity if SETI researchers hadn’t made this humble suggestion: Perhaps the star was being occluded by a giant, alien construction project, a la Dyson sphere.

The citizens of planet Earth worked themselves into a rabid frenzy over the idea, to the point that Neil deGrasse Tyson had to go on late night TV and tell us all to calm the hell down. SETI astronomers capitalised on the momentum, mobilising state-of-the-art observatories to scour KIC 8462852’s cosmic neighbourhood for the radio signals and laser pulses that would lend credence to the wild idea. They found not a single fingerprint.

The latest thinking is that KIC 8462852 is probably being occluded by a swarm of comets — BORING — but I’m personally holding out hope that somebody follows up on the giant space walrus idea.

[Image: Artist’s representation of a Dyson sphere, crumbling like the alien megastructure hypothesis, via Danielle Futselaar/SETI International]

Bacon cancer


In October, the world was confronted with some rather unsettling news: bacon, along with other processed meats including hot dogs and ham, is carcinogenic, according to a new scientific paper which evaluated over 800 studies for links between processed or red meat intake and cancer. Unfortunately, many media reports took the “bacon cancer” soundbite and ran with it, leaving readers to imagine that consuming bacon is similar to touching nuclear waste. It’s not.

There are a few reasons we shouldn’t panic about this revelation, as Gizmodo’s George Dvorsky lays out in detail. First and foremost, while the new study did find a real statistical correlation between processed meat consumption and bowel cancer, many subsequent reports failed to identify the magnitude of risk. That turns out to be fairly small. As you might expect, it increases slightly with the amount of processed meat consumed.

To make matters even more confusing, because processed meat is now classified as a Group 1 carcinogen, some articles suggested eating bacon is as bad as smoking cigarettes or asbestos exposure — other Group 1 carcinogens. But again, the Group 1 label has nothing to do with risk magnitude, only the strength of scientific evidence linking a substance to cancer. About 34,000 cancer deaths each year are associated with a diet high in processed meat. Smoking, on the other hand, leads to about a million deaths a year.

If there’s a takeaway in all of this, it’s that it’s probably a good idea to limit your consumption of processed meat — health professionals have been suggesting this for years anyway — and to always be sceptical when reading about new linkages between certain foods and cancer. Because really, when you get down to it, pretty much anything can cause cancer.
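The relative-versus-absolute distinction at the heart of the confusion can be sketched in a few lines of arithmetic. The 18 per cent figure below is the relative-risk increase IARC reported per 50g daily portion of processed meat; the 5 per cent baseline lifetime bowel-cancer risk is a round number assumed purely for illustration.

```python
# Illustrative only: translating a relative-risk increase into absolute risk.
# The 18% relative-risk increase per 50 g/day of processed meat is the IARC
# figure; the ~5% baseline lifetime bowel-cancer risk is an assumed round
# number for illustration, not a figure from the article.
baseline_lifetime_risk = 0.05      # assumed ~5% lifetime bowel-cancer risk
relative_risk_increase = 0.18      # IARC: +18% per 50 g processed meat/day

elevated_risk = baseline_lifetime_risk * (1 + relative_risk_increase)
absolute_increase = elevated_risk - baseline_lifetime_risk

print(f"baseline risk:     {baseline_lifetime_risk:.1%}")
print(f"elevated risk:     {elevated_risk:.2%}")
print(f"absolute increase: {absolute_increase:.2%}")
```

Under these assumptions the daily bacon habit moves lifetime risk from 5 per cent to just under 6 per cent, an absolute increase of less than one percentage point, which is why "fairly small" is the right characterisation.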

Warp drive?!?!?!


It was in 2014 that we first heard whispers of NASA’s EM Drive, an “impossible” engine that could (in theory) accelerate objects (our future spacecraft) to near relativistic speeds without the use of any propellant, simply by bouncing microwaves around a waveguide. The laboratory “evidence” for the physics-defying engine might have been nothing more than analytical error — or, as one expert put it, bullshit — but that didn’t stop people from continuing to scour NASA engineering forums for additional affirmation of the science fictional technology in 2015.


Lo and behold, the sleuths of the internet found some. Apparently, the engineers working on the EM Drive decided to address some of the sceptics’ concerns head-on this year, by re-running their experiments in a closed vacuum to ensure the thrust they were measuring wasn’t caused by environmental noise. As it happens, the new EM Drive tests in noise-free conditions failed to falsify the original results. That is, the researchers had apparently produced a minuscule amount of thrust without any propellant.

Once again, media reports made it sound like NASA was on the brink of unveiling an intergalactic transport system.

The real problem with the EM Drive isn’t the scientists. It isn’t even the science. The problem is that a) NASA hasn’t claimed that the system works; b) there have been no peer-reviewed papers on the subject; and c) as far as we can tell, all evidence for the physics-defying machine comes from a handful of short-term experiments. This is a story of scientists caught in the act of tinkering by people who want Star Trek to happen now.

[Top image via Star Trek Wiki. EM Drive prototype image via NASA Spaceflight Forum]

An ice age in 2030?


You know what would really save us from this global warming mess we’ve gotten ourselves into? An ice age! And earlier this year, it seemed like our prayers were answered, when a new astronomy study suggested that the sun is heading for a period of extremely low solar output — a so-called ‘Maunder minimum’. A press release accompanying the study explained that predictions from the astronomers’ new models “suggest that solar activity will fall by 60 per cent during the 2030s to conditions last seen during the ‘mini ice age’ that began in 1645.”

This led to some confusion.

Even if it’s true that the sun’s output is on the verge of declining to levels not seen in over 350 years — and the likelihood of that varies greatly from study to study — it’s misleading to say we’re on the brink of an ice age. The Little Ice Age saw temperatures drop by about 1°C, whereas real ice ages are characterised by global average temperatures 5°C cooler than today.

It’s also misleading to insinuate that the 17th century Maunder minimum even caused the Little Ice Age. As astronomer Jim Wild explained earlier this year, the Little Ice Age began over a century before the start of the Maunder minimum and continued long after it was over. People still aren’t sure what led to the cold snap — the leading suspect is currently volcanic activity — or if it was even a global phenomenon.

Finally, the overwhelming consensus of the world’s climate scientists is that the influence of solar variability on climate is dwarfed by the impact of increased CO2 in the atmosphere. Indeed, many calculations suggest that a “grand solar minimum” would at best offset a few years’ worth of the warming that’s being caused by human carbon emissions.
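The claim that a grand solar minimum would offset only a few years’ worth of warming can be sanity-checked with rough forcing numbers. Both figures below are ballpark assumptions drawn from typical published estimates, not from the article: a Maunder-minimum-like dip is thought to cut solar radiative forcing by very roughly 0.2 W/m², while rising greenhouse gases currently add forcing at very roughly 0.035 W/m² per year.

```python
# Back-of-the-envelope check, not article figures: both values below are
# rough illustrative assumptions in the range of published estimates.
solar_minimum_forcing = 0.2    # W/m^2: assumed dip from a grand solar minimum
ghg_forcing_per_year = 0.035   # W/m^2 per year: assumed greenhouse-gas growth

years_offset = solar_minimum_forcing / ghg_forcing_per_year
print(f"A grand solar minimum would offset roughly {years_offset:.0f} years "
      "of greenhouse-driven warming.")
```

Under these assumptions the answer lands around half a decade, consistent with the “few years’ worth” figure: the dip is a one-off, while the greenhouse forcing keeps climbing every year.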

Simply put, we cannot bank on the vagaries of the sun to save our collective arses this century.

[Image: London policemen on ice skates on the frozen River Thames circa 1900, via Getty]

The tardigrade’s seriously weird genome


Tardigrades — those weird, wonderful, microscopic poncho bears that’re virtually indestructible — got even weirder this year, when researchers at the University of North Carolina Chapel Hill decided to sequence the tardigrade genome. Astonishingly, the team discovered that a full sixth of the animal’s DNA was not animal DNA at all: it was from plants, fungi, bacteria, and viruses. Nobody had ever seen anything like it before, which in hindsight, maybe should have been a red flag.

As Annalee Newitz explained last month, the authors suggested the tardigrade’s patchwork genetic code was acquired via horizontal gene transfer, and that this could be related to the animal’s unique stress response:

“When tardigrades are desiccated, their DNA breaks into pieces. Any organisms around them will also suffer the same fate. But when water returns to the tardigrade’s environment, they re-hydrate and return to life. As they re-hydrate, their cell walls become porous and leaky, and fragments of DNA from the desiccated organisms around them can flow inside and merge with the animal’s rejuvenating DNA.”

Furthermore, the UNC authors speculated that the tardigrade’s borrowed genes may help the animal withstand everything from boiling water to the vacuum of space. It’s a fascinating story about an amazing organism, so it’s no surprise the paper got a lot of pickup. But it’s not at all clear that the conclusions are sound.

Indeed, less than one week after the UNC Chapel Hill version of the tardigrade genome was published in PNAS, another lab at the University of Edinburgh posted a pre-print of their tardigrade genome analysis, which painted an entirely different picture. Edinburgh researchers found very little evidence for horizontal gene transfer — as few as 36 genes, compared with the 6600 reported by UNC Chapel Hill.

How could this be? One possibility is that many of the sequences the UNC team called bona fide tardigrade genes were, in fact, microbial contamination. As science journalist Ed Yong explains over at The Atlantic, the Edinburgh team carefully cleaned up their data to remove many sequences that were only present in trace quantities, which the scientists presumed to be contaminants. “I want to believe that massive HGT happened, because it would be an awesome story,” Mark Baltrus, lead author of the Edinburgh study, told The Atlantic. “But the problem is that extraordinary claims require extraordinary evidence.”

On the bright side, what could have become a bitter dispute between rival labs turned into a fruitful collaboration: the two teams are now sharing their data in an attempt to reconcile their disparate findings.

Science is a messy, error-fraught business — and if we think we’re doing it all right the first time, chances are we’re wrong.

[Image via Sinclair Stammers]

