It was a restructuring year in certain ways, as emerging technologies for the enterprise gradually moved forward but didn’t result in as many new targets to track as last year. Yet it’s also abundantly clear the largest digital shifts by far are still ahead of us. Here’s how the year ahead is shaping up.

Enterprise IT in 2017 continues to be highly in tune with consumer technology, but for a change this year we can see a concerted push to shape business-ready versions of emerging tech in hot new categories. This is especially the case in arenas like blockchain, digital twins, marketing integration solutions, and digital transformation target platforms.

Not too many items fell off this year’s enterprise tech to watch list either, as organizations continue to struggle to adopt the growing raft of relevant new technologies that have steadily arrived on the scene recently.

Consequently, the portfolio of emerging tech that must be managed continues to grow quickly, even as IT spending — and overall tech absorption capacity — is increasing only in the low single digits. This is an untenable proposition that’s putting more and more IT organizations under stress. Most significantly, this is creating service backlogs that are pushing “edge IT” implementation and acquisition — or systems that are not considered mission critical enterprise-wide — out into lines of business for realization as they see fit, as well as fueling so-called shadow IT projects at the departmental level.

As a result, I currently find that IT organizations are seeking novel ways to learn about and adopt emerging tech to ride the exponential curve of digital change. That’s a separate conversation, however, and one that is becoming urgent as we see the CIO under significant and steady pressure to deliver far more quickly in 2017.

For this year’s list of enterprise tech to watch, I’ve attempted to synthesize data from all available sources, particularly industry trend data. In general, I prefer to include technologies on the list that are expected to grow in the double digits every year for the next half decade or more. However, I’ll sometimes include important new technology categories if they clearly warrant it based on early importance, even if forecasts aren’t readily available.

As a result, we’ve seen the steady shift from SMAC (social, mobile, analytics, cloud) that dominated this list at its inception to one that is more focused on artificial intelligence, Internet of Things, distributed ledgers, immersive digital experiences (AR/VR), edge computing, low code tools, and much more.

That’s not to say that essentially mainstream technology bases like public cloud, cybersecurity, or big data are staid and therefore are about to come off the list. In fact, they are shifting and evolving more now than ever before and should remain at the top of the technologies that most enterprises should be watching very closely today.

Based on my analysis then, here is the short list of enterprise technologies that organizations should be tracking for building skills, assessing their strategic and tactical impact, experimenting with, and subsequently preparing for wider-scale adoption, often as part of a more systematic program of digital transformation.

As in previous years, I’ve also included a horizons list in this year’s tech to watch, which shows technologies that are almost certainly going to be significant in coming years but should, for now, be relegated primarily to tracking and monitoring, unless they have near-term impact on your core business.

The enterprise technologies to watch in 2017

In roughly clockwise order, here’s the breakdown, with a brief note on why each enterprise technology to watch this year is significant, with data on its outlook if available:

    • Machine learning.

    • Separating out the topic of machine learning from artificial intelligence is still a tricky task. However, I categorize machine learning as the ability for systems to learn from data in an unsupervised manner and with minimal guidance, while artificial intelligence represents systems that can improve themselves through more abstract reasoning not necessarily dependent on data. It’s tougher to tease apart the forecasts for the two, as they are often lumped together. However, one leading report this year, citing the expectation that use of machine learning will become common to support activities in the workplace (a sentiment I very much concur with), expects 43 percent annual growth for the category, which will reach $3.7 billion in revenue by 2021.
    • Contextual computing.
    • The increasing desire to augment productivity and collaboration by supplying information on-demand, usually just as it’s needed and before it’s explicitly asked for, has already become big business. Established industry players such as Apple, Intel, and Nokia are working on and/or offering context-aware APIs already, while a raft of startups is competing to make the early market. Contextual computing is now expected to grow 30 percent annually and reach a market size of a whopping $125 billion by 2023, largely due to widespread use in consumer mobile devices and smart agents.
    • Virtual reality.
    • Still a niche technology despite support from major industry players such as Samsung, with its inexpensive yet high-quality Gear VR headset, and Apple, with its new ARKit, virtual reality is poised to account for a growing share of the end-user experience as the technology becomes more refined and, especially, less bulky and intrusive. While just a half-billion-dollar market today, virtual reality is expected to grow at the blistering pace of 133 percent a year on average, becoming a $35 billion industry by 2021.
    • 3D and 4D printing.
    • While evolving in fits and starts, 3D printing has already become important to a wide range of industries, from aerospace and energy to electronics and even culinary uses. 3D printing is remaking the logistics industry as well by moving manufacturing directly to the point of use and making it on-demand. 3D printing will become a significant industry quite soon, growing by 26 percent annually through 2023 and becoming a $33 billion market. 4D printing, which produces objects whose shapes can change over time, is a much smaller industry but as a result is growing quickly, at 39 percent a year through 2022, by which point it will likely be a $100 million-plus market.
    • 5G wireless.
    • Few mobile technologies are as anticipated as 5G, the next generation of wireless telecommunications standards and infrastructure, which will bring revolutionary bandwidth increases (potentially up to 1Gbps in some cases) and enable new high-value business scenarios including immersive virtual reality telepresence, 4K/8K video streaming, and other very high bandwidth uses. While still not expected to commercialize until at least 2019, 5G is widely expected to impact numerous industries and markets, despite real challenges in beaming millimeter waves significant distances, fueling futuristic experimental 5G projects like Google Skybender. 5G spending is expected to grow by 70 percent a year and reach at least $28 billion a year in revenue by 2025.
    • Real-time stream processing and analytics.
    • Best exemplified by software and cloud services like Apache Spark and Amazon Kinesis, stream processing addresses the need, fueled by the Internet of Things revolution and rich media in general, to process and analyze massive amounts of data without any delay, both event metadata and the data itself, as it flows in from services and devices on the edge of the network. While not quite the white-hot item it was last year, stream processing remains a critical technology for data-driven companies. Stream processing and analytics is expected to grow 33 percent a year through 2025.
    • Wearable IT.
    • The market for enterprise wearables remains quite small and is still limited to niche applications like corporate wellness, hands-free scenarios, situational customer/workforce experiences (typically location-based or rapid notifications), and just-in-time decision making. Yet this belies the anticipated growth of the category, which is expected to expand by over 75 percent a year and become an industry $12 billion in size by 2021.
    • Mobile payments.
    • With Apple Pay’s steady expansion, the rise of Samsung Pay, and the use of mobile devices for payments across the developed and developing world, the smart device is rapidly becoming the wallet of the future. Enterprises must become ready to access these revenue streams and watch the evolution of the industry closely, as revenue flows move to digital channels not controlled as much by traditional financial institutions. Mobile payments are currently expected to grow by 20 percent a year globally and become a $1.7 trillion industry by 2022.
    • Containers.
    • Best known through their popularization by Docker, containers remain a leading on-ramp and direct pathway to cloud computing as well as to a more modern and effective model for the design, management, governance, and optimization of IT applications. Considered a contemporary method to architect and operate cloud software today, containers are on the short list of models most organizations are seriously considering for go-forward applications, whether bought or built. The growth picture tells the story here, with a 40 percent annual growth rate and a $2.6 billion market for container-as-a-service by 2020.
    • Mobile business apps.
    • Stubbornly one of the most challenging aspects of enterprise IT, good mobile apps for both internal and external customers remain a challenge for the average organization, yet are critical to the success of its digital experiences. Why are mobile apps so hard? A confluence of reasons: The two main mobile platforms (iOS and Android) are large and complex, and they are still fairly unfamiliar to most of IT, while mobile application management issues, the proliferation of devices and form factors, and security concerns round out the barriers. Mobile apps are expected to grow by 14 percent a year and reach $100 billion in yearly revenue by 2022, yet the enterprise component of that is likely to remain small. Leading organizations can seize first-mover advantage in 2017 and beyond by providing the mobile experiences their stakeholders want.
    • On-demand, as-a-service, and software-defined everything.
    • In short, everything in the IT business, from security and storage to networking, computing, and applications, is becoming software-defined and packaged as an on-demand service. While this is nothing new, it can be alarming to find that most modern IT offerings want to meter everything, and you can no longer simply purchase a product and pay maintenance. This is distressing enough that I’ve had more than one CIO complain to me that it feels like buying all their IT over again every year, an expectation that vendors will have to manage. There are so many projections in this space that I’ll just select one overall forecast of IaaS, PaaS, and SaaS as a whole, which is expected to become a $390 billion industry in just 2.5 years, by 2020.
    • Workplace hubs.
    • There is growing interest in streamlining and making the workplace more efficient and centralized. Many of the latest tools — from Slack and Microsoft Teams to IBM Connections (with AppSpokes) and Cisco Spark — are creating powerful new workplace hubs that allow systems of record and systems of engagement to come together more effectively into a consistent and contextual digital workplace, complete with integrated apps. How big is this trend so far? It’s challenging to estimate, as there is no dedicated forecasting in this category yet. However, I have included it as a clear industry trend based on the inclusion of these capabilities in most of the latest enterprise collaboration offerings.
    • Edge/fog computing.
    • As the Internet of Things and other computing form factors that move data gathering out to the far-flung edges of the network grow in scale and data volume, there is a growing need to put more intelligent processing at the edge, rather than transporting everything across the cloud. Rather than being a countertrend to the cloud, edge computing (sometimes called fog computing) complements it by putting computing power in cloud-friendly technology packages where it makes the most sense for cost and performance reasons. Edge computing will grow by 35 percent annually through 2023, when it will become a $34 billion industry.
    • Adaptive cybersecurity.
    • Perhaps the real top priority of many CIOs, cybersecurity has assumed a preeminent place in IT strategy and investment, despite being almost exclusively a cost center that keeps the business running and customers safe. Adaptive cybersecurity, which uses a combination of artificial intelligence and other methods to dynamically shift tactics and detect/remove threats as quickly as possible, is among the very forefront of security methods. Adaptive cybersecurity will grow by 15 percent a year and will become a $7 billion industry by 2021.
    • Team collaboration.
    • Smaller-scale collaboration has become very popular in the last few years, augmenting the big shift toward enterprise-scale collaboration five years ago or so. The rise of nimbler, more team-based tools like Slack has been well documented, as have the dozens of me-too competitors. At the same time, many applications have adopted chat tools within them, and consumer services like WhatsApp are widely used for business. Enterprises have been forced to adopt multi-layered collaboration strategies to cope. Nevertheless, it’s clear that the resurgence of team collaboration is here to stay. Overall, the global cloud collaboration market (where the vast majority of team collaboration is offered today) is growing at 13 percent a year and will be a $38 billion industry by 2020.
    • Marketing integration.
    • One of the worst-kept secrets of the marketing technology industry is that almost none of it fits together without manual integration, despite a rapidly expanding multichannel world where this is far and away the largest problem currently reported by brands. As I explored recently in the struggles of companies to gain a single view of the customer, the explosion of marketing solutions is making the problem worse, not better. Yet there is no actual category of marketing integration tools, though a good number of solutions address at least part of the issue. There will be volumes written about the mismatch between marketing technology availability and actual customer needs today, but we can use marketing automation as a related “stand-in” industry that does some “martech” integration; it will grow at 11 percent a year and be an $8 billion industry by 2025.
    • Digital twins.
    • One of the new entrants to the main list this year, digital twins are software-based replicas of business assets, processes, and systems, especially ones based on IoT, that can be used for purposes such as modeling, forecasting, and business transformation; they have been trumpeted prominently by market leaders like GE as a key to successful digital transition. Organizations can increase predictability, lower risk, and test innovation much more quickly using their digital twins. As a very new enterprise concept, there is no publicly available market forecast yet for digital twins, but Gartner has prominently included them in its top 10 strategic tech trends for 2017.
    • Multichannel digital experience.
    • As I explored in the marketing integration pieces, creating a cohesive experience across multiple digital channels (mobile, social, devices, apps, etc.) remains a top challenge for organizations, and one that goes well beyond marketing. Often known as the “omnichannel” problem, the issue is that new digital channels emerge and become important far faster than the response windows of digital experience teams. Digital experience platforms help address this channel fragmentation. The category is also known as customer experience management (CEM), though I don’t use that term because “customer” is a misnomer: the digital experience must be managed alike for customers, prospects, suppliers, partners, and the workforce. The digital experience industry will grow by 21 percent a year and become a $13 billion industry by 2021.
    • Microservices.
    • A more refined and fine-grained way to architect modern IT, microservices have gained the upper hand as the leading way to open up data and systems for use and reuse by other parts of the business and for open APIs to third-party suppliers and developers. As a key part of the strategic digital ecosystem story, microservices will grow at a reported 16 percent a year and be a $10 billion industry within a few years.
    • Digital transformation target platforms.
    • These are capabilities built on top of enterprise cloud stacks from the likes of Amazon, Microsoft, and Google Cloud: patterns, templates, industry accelerators, and emerging tech capabilities like blockchain and IoT packaged in business solution frameworks, offering a proven path through which to implement an enterprise-scale digital transformation. One recent notable example of this product category is SAP Leonardo. I’ll be posting my findings on other top solutions soon. There is no market estimate yet for this brand-new category.
    • Digital learning.
    • I’m retiring MOOCs and global solution networks as explicit entrants from last year’s list; they remain important categories, but are now subsumed into this larger one. Digital learning, essential to staffing the modern digital enterprise with talent, is shifting to more sophisticated models, from microlearning to adaptive learning systems, even as community-based models remain as important and fast-growing as ever. The overall smart learning market is a juggernaut, as education is generally, and will grow at 25 percent yearly to become a $584 billion industry by 2021.
    • Artificial intelligence.
    • Cognitive systems have become powerful enough to begin cracking some of our most challenging business issues and are at the top of the venture capital, acquisition, and enterprise IT priority lists of many organizations. The industry is expected to grow at a 52 percent annual pace and be a $36 billion market by 2025.
    • Customer journey management.
    • Using data to dynamically provide the best-quality, adaptive, and personalized customer experience across an organization’s various silos (marketing, sales, operations, customer care, etc.) is the next and more strategic progression of multichannel digital experience. While still allocated to the customer experience management function, it’s a separate concern that can be, and often is, dealt with on its own. Again, this is an emerging product category, but insofar as it realizes an effective data-driven customer experience, it should grow 14 percent year over year into a $12 billion enterprise industry by 2023.
    • Internet of Things (IoT) and Internet of Everything (IoE).
    • As just about every manufactured object in the world, and quite a few non-manufactured objects instrumented with sensors, becomes pervasively connected, the number of devices on our networks is set to grow by many orders of magnitude. This creates large business opportunities for organizations ready to capitalize on the global streams of data, analysis, and two-way control that IoT represents. IoE is even more strategic and has become a catch-all phrase for adding both connectivity and intelligence to practically every device and connected scenario in order to give them useful smart functions. IoT numbers almost always impress due to their scale: The IoT market will be $267 billion in size by 2020, with at least a 20 percent compound annual growth rate (CAGR) at every level of the IoT stack. For its part, IoE is estimated to become a vast $7 trillion industry through a 16 percent growth rate, due to so much of the connected computing universe being attributable to it.
    • Blockchain and distributed ledgers.
    • A complex yet historic amalgam of network, cryptography, and database technologies, blockchain and decentralized record systems like it are making big waves in industries like healthcare, insurance, and especially finance, given blockchain’s roots in Bitcoin. While many organizations are grappling with the implications of decentralized, open record keeping for their business models, the writing is on the wall: Most legacy transaction logging systems that are closed and proprietary are likely nearing the end of their useful lifetime. Blockchain and related models for digital ledgers are expected to grow at a 58 percent annual rate and create a $5.4 billion market by 2023.
    • Social business.
    • Long a combined technology and mindset approach to creating more connected and effective communities and organizations, social business remains the most strategic set of ideas and tools for building modern organizations with new communications and collaboration methods. Along the way, the approach has logged hard data on its benefits. While the term itself is aging out, the practice remains at an all-time high in organizations and is growing steadily at a 26 percent annual rate through platforms like enterprise social networks and social business analytics.
    • Open APIs.
    • Part and parcel of the microservices discussion, with which it now overlaps considerably, open APIs have come of age to open up IT for reuse and remixing within organizations and especially out to developer communities and business partners. I’ve been sanguine about this approach for a decade and it’s finally matured into a major industry. While APIs represent many types of technologies and approaches, one key barometer is API management platforms, which will be a $3.4 billion market by 2023 via a 33 percent annual growth rate.
    • Collaborative economy.
    • Also known as the sharing economy, the approach of using the Web as a platform to exchange goods and services more directly and democratically has had its ups and downs over the years. Although the implications of the collaborative economy, a term originally coined by Jeremiah Owyang, go to the very heart of business models and have disrupted entire industries from hospitality to transportation, it’s proven a harder model to repeatably deliver on than some originally thought, even though my view is that most industries have yet to feel the brunt of it. That said, respected organizations like the Brookings Institution have pegged the sharing economy at a whopping $335 billion in yearly revenue by 2025. Consequently, it very much belongs as a core, though existentially challenging, technology on this list again this year.
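The blockchain entry in the list above can be made concrete with a small illustration. The following Python sketch is purely illustrative and vastly simpler than any production ledger (no consensus, no network, no proof of work), but it shows the core cryptographic idea: each block stores a hash of its predecessor, so altering any historical record invalidates every block that follows it.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    # Hash the block's canonical JSON form with SHA-256.
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def add_block(chain: list, data: str) -> None:
    # Each new block records the hash of the previous block.
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"index": len(chain), "data": data, "prev_hash": prev})

def verify(chain: list) -> bool:
    # The chain is valid only if every stored prev_hash still matches.
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

ledger: list = []
for entry in ("open account", "deposit 100", "withdraw 30"):
    add_block(ledger, entry)

print(verify(ledger))            # True: the ledger is intact
ledger[1]["data"] = "deposit 9999"
print(verify(ledger))            # False: tampering broke the chain
```

This tamper-evidence, multiplied across many independent parties holding copies of the same chain, is what makes closed, proprietary transaction logs look so dated by comparison.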

The upshot is that there are a great many technologies on the enterprise tech to watch list, an all-time high in fact, never mind the horizon list, which is poised to be even more disruptive in many cases. As I pointed out a few years ago, technology cycles are coming more and more quickly and fixed, traditional strategic planning cannot take them into account adequately.

For most organizations, this will mean all new ways of thinking about and managing the technology adoption life cycle. Fortunately, we have fresh choices and new ways of activating forces for change at scale that do seem to be able to better accommodate the size and scope of the challenge at hand. In the meantime, we live in very exciting times indeed, even though it’s still literally just the dawn of digital technology in the enterprise.

Sourced from Enterprise Web 2.0

Additional reading

Henry Sapiecha

An analysis of the world’s most valuable scientific documents and manuscripts, illustrating both how far science has come in a relatively short time and how little we value our legacy in monetary terms

This is the second of a six-part series covering the most valuable scientific documents and manuscripts from #50 to #41. The introduction to the marketplace is the first part of the series and #40-31, #30-21, #20-11 and #10-1 will follow over consecutive days. Links to other parts of the series will be added here as they are published

50 – Autograph manuscript of Einstein’s first scientific essay

[Image: Einstein’s first scientific paper, written at 16 years of age]

Price: US$676,369 (£344,000)

Estimate: £300,000 – £500,000

Created: circa 1894 – 1895

Significance: Einstein’s first scientific paper, written at 16 years of age (he is pictured above at 14), contains the seeds of the theory of relativity. It pursues an inquiry relating to the ether, the elastic substance which, according to the science of the day, filled all of space. It was Einstein’s continued interest in questions on the boundary between mechanics and electro-magnetics that provided the departure point for his 1905 special theory of relativity, which was to cause the final abandonment of the ether concept.

Some perspective on the price: Items to sell for a similar amount at auction include Marilyn Monroe’s baby grand piano ($662,500), Dorothy’s ruby slippers from the Wizard of Oz ($666,000), an Olympic Games Torch from the 1952 Helsinki games ($658,350), the jersey worn by American captain Mike Eruzione in the “The Miracle on Ice” Gold Medal Ice Hockey game at the 1980 Winter Olympics ($657,250), a Babe Ruth New York Yankees jersey ($657,250), a pocket watch given to Babe Ruth by the New York Yankees ($650,108), plus original art from an Incredible Hulk comic book ($657,250) and original art from a Spider-Man comic book ($657,250).

49 – Journey of Discovery to Port Phillip, New South Wales
by William Hovell and Hamilton Hume

[Image: Journey of Discovery to Port Phillip, New South Wales]

Price: $688,286 (AUD932,000)

Estimate: AUD750,000 – AUD850,000

Created: The overland exploration detailed in the book was undertaken in 1824 and 1825; the book was published in 1837, but this copy was one of a few printer’s proofs created in 1831.

Significance: The only unpublished proof copy in private hands of a landmark book about the exploration of Australia. Look closely at the map above (from a Sotheby’s auctioned copy of the second edition) and you’ll see that Port Phillip is the area upon which Australia’s second largest city, Melbourne, now sits. The auction copy was given to French navigator Louis de Freycinet (1779 – 1841), whose annotations to the text can be seen in the auction copy alongside those of its editor, convicted murderer and subsequently member of Parliament, Dr William Bland. Freycinet was the first person to publish a map showing the full coastline of Australia in 1811. The full text of the most expensive Australian book ever to sell at auction has been digitized and is available for free online via Project Gutenberg.

The world’s most expensive movie poster sold for $690,000

Some perspective on price: In terms of items that have sold for a similar amount at auction, the world’s most expensive movie poster sold for $690,000 at a Reel Galleries auction in November, 2005. The poster is one of just four surviving from the epic 153-minute 1927 silent movie classic Metropolis, the story of a dystopian future set in the year 2000 and one of the first feature films to pioneer the science fiction genre. German artist Heinz Schulz-Neudamm (1899-1969) created the poster, the novel and screenplay were written by Thea Von Harbou (1888-1954), and the film was directed by Thea’s husband, Fritz Lang (1890-1976). You can watch the trailer for the remastered original movie here.

48 – The Principal Navigations, Voiages, Traffiques and Discoveries of the English Nation
[Image: The Principal Navigations]

Price: $743,687 (£458,500)

Estimate: £180,000 — £240,000

Created: The copy that achieved this price was of three volumes bound as two, dated 1598, 1599 and 1600 respectively. This copy is the first issue of the second edition with Volume One dated 1598. The first edition was published in 1589, with this copy of the second edition greatly expanded and including the very rare Wright-Molyneux world map.

Significance: Though Richard Hakluyt (1553 – 1616) never traveled further from England than France to assemble this work, he met or corresponded with many of the great explorers, navigators and cartographers, including Sir Francis Drake, Sir Walter Raleigh, Sir Humphrey Gilbert, Sir Martin Frobisher, Abraham Ortelius and Gerardus Mercator. In addition to long and significant descriptions of the Americas, this work also contains accounts of Russia, Scandinavia, the Mediterranean, Turkey, Middle East, Persia, India, South-East Asia and Africa. This copy includes an account of “the famous victorie atchieved at the citie of Cadiz” (by Sir Francis Drake), which was ordered to be suppressed in 1599, and therefore is sometimes missing in copies of this work.

The Wright-Molyneux map is based on Mercator’s projection, which Mercator expected would be a valuable tool to navigators, and this map was one of the first to use it. Unfortunately, Mercator gave no explanation as to the underlying mathematics used to create the map and it was left to Edward Wright to explain it in Certain Errors in Navigation Detected and Corrected (1599), hence the projection sometimes being called the Wright Projection by English mapmakers. The map is linked to Emery Molyneux, whose globe of 1592 provided most of the geographical information it contains. Hakluyt’s use of this map in his publication was to show “so much of the world as hath beene hetherto discouered, and is comme to our knowledge.” Wright later translated John Napier‘s pioneering 1614 work that introduced the idea of logarithms from Latin into English.
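The mathematics Wright supplied can be stated compactly in modern terms. On a sphere of radius R, the Mercator vertical coordinate for latitude φ is y = R · ln(tan(π/4 + φ/2)), which is exactly the stretching of the parallels that Wright tabulated so that a constant compass bearing (a rhumb line) plots as a straight line. A minimal Python sketch, assuming a spherical Earth of unit radius:

```python
import math

def mercator_y(lat_deg: float, radius: float = 1.0) -> float:
    """Vertical Mercator coordinate for a latitude given in degrees.

    The spacing between parallels grows toward the poles, which is the
    distortion Wright worked out and tabulated for English mapmakers.
    """
    lat = math.radians(lat_deg)
    return radius * math.log(math.tan(math.pi / 4 + lat / 2))

# The equator maps to y = 0, and equal steps of latitude map to
# increasingly large steps of y as one moves poleward.
for lat in (0, 30, 60):
    print(lat, round(mercator_y(lat), 4))
```

The formula diverges as φ approaches 90 degrees, which is why Mercator charts cannot show the poles, a limitation navigators of Hakluyt’s era could comfortably live with.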

[Image: Gilt and brass astronomical table clock and brass astrolabe]

Some perspective on price: Two fascinating scientific instruments have also sold in this price range, being a Gilt and Brass Astronomical Table Clock (above left) made in Augsburg (Germany) circa 1560 – 70, which sold for $725,000 at a Christies (New York) auction in January, 2015, and a brass Astrolabe made by Muhammad ibn Ahmad al-Battûtî in Morocco, circa 1757, which sold for £421,250 ($729,021) at a Sotheby’s (London) auction in October, 2008.

47 – Les Voyages du Sieur de Champlain Xaintongeois
by Samuel de Champlain

[Image: Les Voyages du Sieur de Champlain Xaintongeois]

Price: $758,000

Estimate: $250,000 – $350,000

Created: 1613

Significance: The renowned Siebert copy of this first edition landmark of French Americana and New World exploration, a pioneering work in ethnography and the first accurate mapping of the New England coast. One of the finest copies of this work extant, it previously sold in May, 1999 at a Sotheby’s New York auction for $360,000.

From the auction description: One of the most important works of the 17th century, remarkable in its content and execution, being the work of one man – a gifted naturalist, an artist (trained as a portrait painter in France), a skilled cartographer and sympathetic ethnographer. Samuel de Champlain’s account of his voyages of 1604, 1610, 1611 and 1613 is a key exploration narrative, one considerably enhanced by the author’s lively illustrations in which he records his mapping of a vast area with unprecedented detail and accuracy, while also depicting the flora and fauna of the New World. The vignettes within the rare Carte Geographique de la Nouvelle Franse are an artist’s rendition of new species, giving a hint of the varied and vast natural resources to be found in the New World. Of this monumental cartographic endeavor, Armstrong called the map, “not the work of a bureaucrat, but of a skillful psychologist, promoter and politician…Champlain’s map of 1612 is the most important historical cartography of Canada.”

You can read the complete book (albeit in French) at Bibliothèque Nationale de France or see the main illustrations in detail at Canada’s McGill Bibliothèque.


Some perspective on price: Interestingly, several other items of historical significance to the United States have sold for a similar amount at auction. These include a 1777 manuscript map of New York Island from the American Revolutionary War (above) that fetched $782,500, Julia Ward Howe's original autograph manuscript of "The Battle Hymn of the Republic" that also fetched $782,500, W.I. Stone's 1823 "50th Anniversary" engraving of the Declaration of Independence that also sold for $782,500, and a draft manuscript of the United Kingdom's Stamp Act of 1765 (an effort to heavily tax the colonies and a catalyst for the American Revolution), which sold for $755,000.

46 – The Decades of the Newe Worlde
by Pietro Martire d’Anghiera


Price: $768,000

Estimate: $80,000 – $120,000

Created: Published in 1555, but translated from works in other languages produced over the previous 75 years.

Significance: The full title of this book is The Decades of the newe worlde or west India, Conteyning the nauigations and conquestes of the Spanyardes, with the particular description of the moste ryche and large landes and Ilandes lately founde in the west Ocean perteynyng to the inheritance of the kinges of Spayne.

It is the first series of narratives on epic voyages, based on the first three Decades of Peter Martyr (Pietro Martire d'Anghiera – read the text in English here), which were originally written in Latin between 1511 and 1530. The book was edited and translated into English by Richard Eden and published in London by William Powell in 1555. The auctioned book sold for almost 10 times its estimate, mainly due to its significance as the first edition of the first collection of voyages printed in English, and the first work to contain narratives of English voyages.

Besides the three Decades of Peter Martyr, it contains a translation of that author's "De nuper sub D. Carolo repertis Insulis" (describing the voyages of Francisco Hernández de Córdoba, Juan de Grijalva, and Hernán Cortés), the Bull of Pope Alexander (by which he decreed that the world was to be divided between Spain and Portugal), as well as translations of the most important parts of the works pertaining to the maritime discovery of the New World by Oviedo, Maximilian of Transylvania, Vespuccius, Gomara and others.

This book is quite a compendium of important work, as it also contains the first printed English treatise on the compass, the first description of “What degrees are,” and “A demonstration of the roundness of the Earth.”

In the book’s preface, the colonization of North America by the English is advocated for the first time and according to The art of navigation in England in Elizabethan and early Stuart times, “for over a quarter of a century it proved to be the English source-book of geographical and navigational knowledge” and “as such it was to be of the utmost value to men like Hawkins and Drake.”

Emphasizing this last point is the book's provenance – this was Roger North's copy. In 1617, North had sailed with Sir Walter Raleigh on his second expedition to Guiana in South America in search of the mythical "city of gold" known as El Dorado, and in 1620, North was a prime mover behind attempts to establish an English colony on the River Amazon delta. The book bears his signature on the title as well as his motto, "Durum Pati," believed to be an abbreviation of Horace's "Durum, sed levius fit patientia…" ('Tis hard! But that which we are not permitted to correct is rendered lighter by patience). The book is available in full on the Internet Archive.

Some perspective on price: The baseball hit by Barry Bonds for career home run #756 (breaking MLB's all-time home run record) sold for $752,467 at an SCP auction in 2007.

45 – The Atlantic Neptune published for the use of the Royal Navy of Great Britain
by Joseph Des Barres


Price: $779,200

Estimate: $400,000 – $600,000

Created: 1774-1779

Significance: Swiss cartographer Joseph Frederick Wallet Des Barres (1722-1824), a member of a famous Huguenot family, studied mathematics under Daniel Bernoulli at the University of Basel and then military surveying at Great Britain's Royal Military Academy, which led to a commission into the Royal Americans in 1756 and a role as a cartographer in the Seven Years' War. Using documents captured at Louisbourg, Des Barres compiled a large-scale chart of the St. Lawrence River and Gulf, which enabled the British Navy to navigate its warships upriver and take control of the French capital at Quebec. The victory demonstrated the benefits of accurate marine surveys, and of Des Barres' capability in particular, and the Admiralty provided him with the resources to accurately chart the coast of Atlantic Canada and the eastern seaboard from New England to the West Indies. The result, some 17 years later, was this book: a maritime atlas that set the standard for nautical charting for half a century.

Some perspective on price: Several copies of this work have achieved similar high figures, and it is clear that both mariners and historians considered it to be “the most splendid collection of charts, plans and views ever published.”

44 – Atlas Sive Cosmographicae Meditationes De Fabrica Mundi Et Fabricati Figura
by Gerard Mercator


Price: $783,346 (£422,400)

Estimate: £60,000 – £80,000

Created: 1595

Significance: The first atlas to be so called. The first four parts had been published between 1585 and 1589 (see previous lot). To these were added a fifth and final part, Atlantis pars altera, published in 1595, a year after Mercator’s death, and overseen by his son Rumold. This part includes maps of the world and the continents. The complete atlas was dedicated to Queen Elizabeth and the whole was preceded by the famous engraved general title-page showing Atlas measuring the world with a pair of dividers. Interestingly, Mercator refers to Atlas, King of Mauretania (now Morocco), a mathematician and philosopher who is generally credited with having made the first celestial globe, not the mythical Greek god Atlas, whose punishment was to carry the world and heavens on his shoulders. We humans certainly have a propensity to get our stories mixed up.

43 – Viviparous Quadrupeds of North America
by John James Audubon


Price: $793,000

Estimate: $600,000 – $700,000

Created: 1845 – 1854

Significance: The most expensive of numerous copies of John James Audubon's second masterpiece. "Viviparous" means birthing young from within the body, so this book is essentially a study of North American mammalian wildlife, and as in Audubon's best-known "Birds of America," each animal is superbly illustrated in its natural habitat. Equally impressive and sweeping as his ornithological work, the "Viviparous Quadrupeds of North America" is the result of the artist/naturalist's years of field research, travel, and seemingly endless study, and is the outstanding work on American animals produced in the 19th century. The entire book has been digitized by the University of Michigan's Special Collections Library and is available in high resolution for free download and use, with attribution.

42 – Globus Mundi


Price: $837,227 (€600,000)

Estimate: €500,000

Created: 1509

Significance: "Globus Mundi" does not list an author, but is considered so valuable because it is the first book on cosmography to officially use the term America as the common name to describe the "New World." It was published in Strasbourg (Germany) in 1509 by Johann Grüninger.


Some perspective on price: An astrolabe made for the Duke of Parma by Erasmus Habermel sold for $841,070 (£540,500) at a Christie's (London) auction in October 1995.

41 – Aves Ad Vivum Depictae A Petro Holysten Celeberrimo Picture
by Pieter Holsteyn the Younger


Price: $850,000

Estimate: $300,000 – $500,000

Created: circa 1638

Significance: Pieter Holsteyn II (1614 – 1673) worked closely with his father, Pieter Holsteyn the Elder, in producing fine gouaches and watercolor natural history portraits and botanicals, and grew to become one of the Dutch Golden Age watercolor masters. His particular skill was the delicate but detailed depiction of many of the new and exotic species being returned to Amsterdam from the voyages of the Dutch East India Company. This particular book is extremely rare, as most of the natural history albums produced in the 17th century have long since been broken apart and the images sold piecemeal. The book is renowned for its famous illustration of the now-extinct White Dodo.

Some perspective on price: A similar collection is for sale at Arader Galleries in New York at a price of $4.5 million.

Continue reading in the third part of the series, numbers 40-31.



Henry Sapiecha



From the rare scribblings of Alan Turing through to the genius of Newton, Einstein and Madame Curie, we continue to navigate our way through the fascinating list of the 50 most valuable scientific documents of all-time.

This is a preview of what is to come in this series on the 50 most valuable scientific documents. Upcoming posts will detail what the documents are, so watch for them here.





Belvedere Amphitheatre will come alive at dusk on Saturday. Photo: Peter Solness

National Science Week is upon us and it’s being celebrated locally as the inaugural Sydney Science Festival.

Naturally there are the big-name events, such as singing astronaut Commander Chris Hadfield, US "celebrity" astrophysicist Neil deGrasse Tyson and Dr Karl Kruszelnicki.

But there are plenty of other events that you should put into your diary.

Here are some that are well worth catching.


1…Field of Orbs
Saturday, August 15, 5.30pm, Centennial Park.


Belvedere Amphitheatre, Centennial Park. Photo: Centennial Parklands

Be part of a light painting extravaganza in Centennial Park as the sun goes down on Saturday evening.

2…100 years of Einstein’s gravity revolution
Monday, August 17, 6.30pm, University of Sydney.

A handwritten detail from Albert Einstein’s general theory of relativity. Photo: David Silverman

Albert Einstein published his general theory of relativity in 1915. The world has never been the same since. Sydney University’s Professor Geraint Lewis will discuss how his theories of space-time, black holes and expanding universes have changed our world.

3…Lloyd Godson, Undersea Survivor
August 18-23, Maritime Museum.

Underwater living enthusiast Lloyd Godson. Photo: Ocean Exploration Trust

Lloyd Godson is passionate about the human potential for living under the sea. Using technology from Google, Lloyd will be streaming from his prototype underwater habitat. Check it out.

4…Kinda Thinky panel discussion on ‘Excess’
Wednesday, August 19, Powerhouse bar.

Kinda Thinky is an irreverent theme-driven live chat show.

How much stuff is too much? And why do we all want to live longer? Featuring Father Rod Bower, the Gosford Anglican priest behind the provocative political church signs, with recycler David Singh, longevity expert Dr Samantha Solon-Biet, and architect Melonie Bayl-Smith. It’s adults only, with a cash bar. Hosted by Will Grant and Rod Lamberts. Should be fun.

5…Know your own genome
Thursday, August 20, 5pm, Garvan Institute, Darlinghurst.

A replica of the human neuropeptide Y gene at the Garvan Institute. Photo: Wolter Peeters

Do you really want to know your own genome? Discover what genetic testing might reveal and the ethics behind finding out. With world-renowned genetic counsellor Professor Kelly Ormond.

6…Particle Fever
Thursday, August 20, 6pm, University of Sydney.

The Large Hadron Collider at CERN, Switzerland.

The story behind the Large Hadron Collider and the hunt for the Higgs boson. With a special live introduction by Associate Professor Kevin Varvell, Sydney director of the ARC Centre of Excellence for Particle Physics.

7…Quantum computing and teleportation
Thursday, August 20, 6.30pm, Footbridge Theatre, University of Sydney. 


An atomic-scale transistor. Photo: UNSW

Will quantum computers really allow us to teleport objects including ourselves? Join world-class physicists Professor Michelle Simmons, University of New South Wales, and Professor Ping Koy Lam, Australian National University, for a discussion about teleportation and other strange properties of the quantum world.

8…Family day: Indigenous science experience
Saturday, August 22, 10am-4pm, Redfern Community Centre.

Emu in the Milky Way, courtesy Barnaby Norris. Photo: Ian Warden

A hands-on exploration and celebration of Aboriginal and European science, demonstrating the value of traditional knowledge. Hosted by the National Indigenous Science Education Program, Macquarie University, Inspiring Australia, Redfern Community Centre and the City of Sydney.

9…SMH Live: Science and Innovation
Thursday, August 27, 6pm, Australian National Maritime Museum.

Can we ever be the clever country? While the rest of the world is embracing science and innovation in the hunt for new jobs and greater economic opportunities, Australia is at risk of lagging behind. Join SMH's Science Editor, Nicky Phillips, and our expert panel of commentators as they unpick the challenges that stand in the way of us being a truly science- and innovation-led nation.

June 10th marks the (alleged) 263rd anniversary of the day Benjamin Franklin conducted his famous experiment. In celebration, Ohio University put together an infographic that delves into other cool experiments that led to major breakthroughs.



The now-discredited study got headlines because it offered hope. It seemed to prove that our sense of empathy, our basic humanity, could overcome prejudice and bridge seemingly irreconcilable differences. It was heartwarming, and it was utter bunkum. The good news is that this particular case of scientific fraud isn’t going to do much damage to anyone but the people who concocted and published the study. The bad news is that the alleged deception is a symptom of a weakness at the heart of the scientific establishment.

The study in question was the brainchild of Michael LaCour, a graduate student at UCLA, along with Donald Green, a professor of political science at Columbia University. Using surveys, they showed that a 20-minute conversation with a gay person would soften the hearts of opponents to same-sex marriage. The simple act of putting a face on an issue could begin to dissolve abstract ideology and entrenched hostility.

When it was published in Science magazine last December, the research attracted academic as well as media attention; it seemed to provide solid evidence that increasing contact between minority and majority groups could reduce prejudice.

But in May, other researchers tried to reproduce the study using the same methods, and failed. Upon closer examination, they uncovered a number of devastating “irregularities” – statistical quirks and troubling patterns – that strongly implied that the whole LaCour/Green study was based upon made-up data.

The data hit the fan, at which point Green distanced himself from the survey and called for the Science article to be retracted. The professor even told Retraction Watch, the website that broke the story, that all he’d really done was help LaCour write up the findings. What’s more, Green said that he initially had doubts about the results, which were “so astonishing” that they “would only be credible if the study were replicated”. After LaCour “replicated” the result, Green was satisfied, apparently without ever looking at the original survey responses.

Science magazine didn’t shoulder any blame, either. In a statement, editor in chief Marcia McNutt said the magazine was essentially helpless against the depredations of a clever hoaxer: “No peer review process is perfect, and in fact it is very difficult for peer reviewers to detect artful fraud.”

This is, unfortunately, accurate. In a scientific collaboration, a smart grad student can pull the wool over his adviser’s eyes – or vice versa. And if close collaborators aren’t going to catch the problem, it’s no surprise that outside reviewers dragooned into critiquing the research for a journal won’t catch it either. A modern science article rests on a foundation of trust.

Which is a sign that something is very amiss.

Sure, it’s an act of bad faith when a grad student fools his adviser with a fake survey, but it’s also a predictable consequence of the scientific community’s winking at the practice of senior scientists putting their names on junior researchers’ work without getting elbow-deep in the guts of the research themselves.

It’s all too common for a scientific fraud – last year’s Japanese stem-cell meltdown, a 2011 chemistry scandal at Columbia University, the famous materials-science fiasco involving Bell Laboratories’ Jan Hendrik Schon – to feature a young protege and a well-established scientist. The protege delivers great results; the stunningly incurious mentor asks no questions.

And, sure, it’s an act of bad faith when a scientist submits false data to a journal; but the scientific publishing industry encourages such behaviour through lax standards.

You don’t have to look far to find dramatic failures. Recently two scientific publishing houses had to withdraw dozens upon dozens of nonsense papers – computer-generated gobbledygook that somehow passed the peer review process. If the process can’t catch such obvious fraud – a hoax the perpetrators probably thought wouldn’t work – it’s no wonder that so many scientists feel emboldened to sneak a plagiarised passage or two past the gatekeepers.

There’s a deeper structural issue: Major peer-review journals tend to accept big, surprising, headline-grabbing results when those are precisely the ones that are most likely to be wrong. Replications (and failed replications), and less-than-spectacular results are incredibly difficult to get published, even though these constitute the true spine of the scientific endeavour. When scientists are rewarded for producing flashy publications at a rapid pace, we can’t be surprised that fraud is occasionally the consequence.

Despite the artful passing of the buck by LaCour’s senior colleague and the editors of Science magazine, affairs like this are seldom truly the product of a single dishonest grad student. Scientific publishers and veteran scientists – even when they don’t take an active part in deception – must recognise that they are ultimately responsible for the culture producing the steady drip-drip-drip of falsification, exaggeration and outright fabrication eroding the discipline they serve.

Charles Seife is a professor of journalism at NYU. His most recent book, Virtual Unreality, is about deception in the digital world.

Los Angeles Times




ANN ARBOR–An odd, iridescent material that’s puzzled physicists for decades turns out to be an exotic state of matter that could open a new path to quantum computers and other next-generation electronics.

Physicists at the University of Michigan have discovered or confirmed several properties of the compound samarium hexaboride that raise hopes for finding the silicon of the quantum era. They say their results also close the case of how to classify the material–a mystery that has been investigated since the late 1960s.

The researchers provide the first direct evidence that samarium hexaboride, abbreviated SmB6, is a topological insulator. Topological insulators are, to physicists, an exciting class of solids that conduct electricity like a metal across their surface, but block the flow of current like rubber through their interior. They behave in this two-faced way even though their chemical composition is the same throughout.

The U-M scientists used a technique called torque magnetometry to observe tell-tale oscillations in the material’s response to a magnetic field that reveal how electric current moves through it. Their technique also showed that the surface of samarium hexaboride holds rare Dirac electrons, particles with the potential to help researchers overcome one of the biggest hurdles in quantum computing.

These properties are particularly enticing to scientists because SmB6 is considered a strongly correlated material. Its electrons interact more closely with one another than most solids. This helps its interior maintain electricity-blocking behavior.

This deeper understanding of samarium hexaboride raises the possibility that engineers might one day route the flow of electric current in quantum computers like they do on silicon in conventional electronics, said Lu Li, assistant professor of physics in the College of Literature, Science, and the Arts and a co-author of a paper on the findings published in Science.

“Before this, no one had found Dirac electrons in a strongly correlated material,” Li said. “We thought strong correlation would hurt them, but now we know it doesn’t. While I don’t think this material is the answer, now we know that this combination of properties is possible and we can look for other candidates.”

The drawback of samarium hexaboride is that the researchers only observed these behaviors at ultracold temperatures.

Quantum computers use particles like atoms or electrons to perform processing and memory tasks. They could offer dramatic increases in computing power due to their ability to carry out scores of calculations at once. Because they could factor numbers much faster than conventional computers, they would greatly improve computer security.

In quantum computers, “qubits” stand in for the 0s and 1s of conventional computers’ binary code. While a conventional bit can be either a 0 or a 1, a qubit could be both at the same time–only until you measure it, that is. Measuring a quantum system forces it to pick one state, which eliminates its main advantage.

Dirac electrons, named after the English physicist whose equations describe their behavior, straddle the realms of classical and quantum physics, Li said. Working together with other materials, they could be capable of clumping together into a new kind of qubit that would change the properties of a material in a way that could be measured indirectly, without the qubit sensing it. The qubit could remain in both states.

While these applications are intriguing, the researchers are most enthusiastic about the fundamental science they’ve uncovered.

“In the science business you have concepts that tell you it should be this or that and when it’s two things at once, that’s a sign you have something interesting to find,” said Jim Allen, an emeritus professor of physics who studied samarium hexaboride for 30 years. “Mysteries are always intriguing to people who do curiosity-driven research.”

Allen thought for years that samarium hexaboride must be a flawed insulator that behaved like a metal at low temperatures because of defects and impurities, but he couldn’t align that with all of its other properties.

“The prediction several years ago about it being a topological insulator makes a lightbulb go off if you’re an old guy like me and you’ve been living with this stuff your whole life,” Allen said.

In 2010, Kai Sun, assistant professor of physics at U-M, led a group that first posited that SmB6 might be a topological insulator. He and Allen were also involved in seminal U-M experiments led by physics professor Cagliyan Kurdak in 2012 that showed indirectly that the hypothesis was correct.

“But the scientific community is always critical,” Sun said. “They want very strong evidence. We think this experiment finally provides direct proof of our theory.”


President Obama presents the National Medal of Science to awardee May Berenbaum.

At a White House ceremony last Thursday, President Obama presented the National Medal of Science and National Medal of Technology and Innovation to individuals who have made outstanding contributions to science and engineering. The awards are the nation’s highest honors for achievement and leadership in advancing the fields of science and technology.

“The story of these trailblazers reflects our bigger American story of constant transformation,” President Obama said. “They represent the spirit that has always defined the American people, one of restless searching for the right solution to any problem; an inclination to dream big dreams; and an insistence on making those dreams come true.”

Administered for the White House by the National Science Foundation (NSF), the National Medal of Science was established by the 86th Congress in 1959 as a presidential award to be given to individuals “deserving of special recognition by reason of their outstanding contributions to knowledge in the physical, biological, mathematical or engineering sciences.” In 1980 Congress expanded this recognition to include the social and behavioral sciences.

The National Medal of Technology and Innovation was created by statute in 1980 and is administered for the White House by the U.S. Department of Commerce’s Patent and Trademark Office. The award recognizes those who have made lasting contributions to America’s competitiveness and quality of life and helped strengthen the nation’s technological workforce.

Awarded annually, the Medal honors the Nation’s visionary thinkers whose creativity and intellect have made a lasting impact on the United States and its workforce. The President receives nominations from a committee of presidential appointees based on their extraordinary knowledge of chemistry, engineering, computing, mathematics, and the biological, behavioral/social, and physical sciences.

Among this year’s 10 recipients of the National Medal of Science, nine received NSF support at some point in their research careers, for a cumulative total of more than $35 million.

A committee of 12 scientists and engineers is appointed by the president to evaluate the nominees for the award. Since its establishment, the National Medal of Science has been awarded to 487 distinguished scientists and engineers whose careers spanned decades of research and development. The recipients database, with information from 1962 to the present, is searchable by name, affiliation and other criteria.

The names, affiliations, and short biographies of this year’s National Medal of Science Laureates follow:

Bruce Alberts, University of California, San Francisco

Bruce Alberts is an internationally-renowned biochemist and Professor Emeritus at the University of California, San Francisco. In addition to his research in the field of DNA replication, he is an avid proponent of improving science and mathematics education and international scientific cooperation.

Robert Axelrod, University of Michigan

Robert Axelrod is renowned for his work on the evolution of cooperation and its application across disciplines, from the social sciences to biology and computer science. He is a professor in the Department of Political Science and the Gerald R. Ford School of Public Policy at the University of Michigan.

May Berenbaum, University of Illinois at Urbana-Champaign

May Berenbaum’s pioneering studies of insect-plant co-evolution and her extensive public engagement have made her a world-renowned expert on all insect-related matters. Dr. Berenbaum is Professor and Head of the Department of Entomology at the University of Illinois at Urbana-Champaign.

Alexandre J. Chorin, University of California, Berkeley

Alexandre Chorin is an applied mathematician known for his contributions to computational fluid mechanics. He is a professor of mathematics at the University of California, Berkeley, and a senior scientist at the U.S. Department of Energy’s Lawrence Berkeley National Laboratory.

Thomas Kailath, Stanford University

Thomas Kailath is an electrical engineer known for his contributions to the information and system sciences. He currently holds the Hitachi America Professorship of Engineering, Emeritus, at Stanford University.

Judith P. Klinman, University of California, Berkeley

Judith Klinman is a physical-organic chemist renowned for her work on enzymes. She is currently a professor of chemistry and of molecular and cell biology at the University of California, Berkeley.

Jerrold Meinwald, Cornell University

Jerrold Meinwald is considered one of the fathers of chemical ecology. He is currently the Goldwin Smith Professor of Chemistry Emeritus at Cornell University.

Burton Richter, SLAC National Accelerator Laboratory and Stanford University

Burton Richter is a Nobel Prize-winning physicist known for co-discovering the J/Psi meson. He is the Paul Pigott Professor in the Physical Sciences at Stanford University.

Sean C. Solomon, Columbia University

Geophysicist Sean Solomon is director of the Lamont-Doherty Earth Observatory of Columbia University, where he is also the William B. Ransford Professor of Earth and Planetary Science.

And a posthumous Medal to:

David Blackwell, University of California, Berkeley

David Blackwell (1919-2010) was a towering figure in the fields of probability, statistics, and the mathematical sciences. He was a professor emeritus of mathematics and statistics at the University of California, Berkeley.

National Medal of Technology and Innovation awardees:

Charles W. Bachman, Mass.

Edith M. Flanigen, UOP, LLC., a Honeywell Company, N.Y.

Eli Harari, SanDisk Corporation, Calif.

Thomas Fogarty, Fogarty Institute for Innovation, Calif.

Arthur D. Levinson, Calico, Calif.

Cherry A. Murray, Harvard University School of Engineering and Applied Sciences

Mary Shaw, Carnegie Mellon University

Douglas Lowy and John Schiller, National Cancer Institute, National Institutes of Health


Henry Sapiecha

Paul Ashby and Deirdre Olynick of Berkeley Lab, standing at the Advanced Light Source (ALS) Extreme Ultraviolet 12.0.1 Beamline. Credit: Roy Kaltschmidt, Berkeley Lab

Over the years, computer chips have gotten smaller thanks to advances in materials science and manufacturing technologies. This march of progress, the doubling of transistors on a microprocessor roughly every two years, is called Moore’s Law. But one component of the chip-making process needs an overhaul if Moore’s Law is to continue: the chemical mixture called photoresist. Similar to film used in photography, photoresist, also simply called resist, is used to lay down the patterns of ever-shrinking lines and features on a chip.
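That two-year doubling cadence compounds quickly. A toy calculation (the starting count below is illustrative, not from the article) shows how a transistor budget grows under Moore's Law:

```python
def transistors(start_count, years, doubling_period=2):
    """Project a transistor count under Moore's Law:
    the count doubles once every `doubling_period` years."""
    return start_count * 2 ** (years // doubling_period)

# Illustrative only: a chip starting at 1 billion transistors
# doubles five times over a decade, a 32x increase.
print(transistors(1_000_000_000, 10))
```

This is the exponential that drives the whole article: sustaining it requires every step of the process, including the resist, to keep scaling down.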

Now, in a bid to continue decreasing transistor size while increasing computation and energy efficiency, chip-maker Intel has partnered with researchers from the U.S. Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab) to design an entirely new kind of resist. Importantly, they did so by characterizing the chemistry of the photoresist, which is crucial to improving its performance in a systematic way. The researchers believe their results could be easily incorporated by companies that make resist, and could find their way into manufacturing lines as early as 2017.

The new resist effectively combines the material properties of two pre-existing kinds of resist, achieving the characteristics needed to make smaller features for microprocessors, which include better light sensitivity and mechanical stability, says Paul Ashby, staff scientist at Berkeley Lab’s Molecular Foundry, a DOE Office of Science user facility. “We discovered that mixing chemical groups, including cross-linkers and a particular type of ester, could improve the resist’s performance.” The work is published this week in the journal Nanotechnology.

Finding a new kind of photoresist is “one of the largest challenges facing the semiconductor industry in the materials space,” says Patrick Naulleau, director of the Center for X-ray Optics (CXRO) at Berkeley Lab.

Moreover, there’s been very little understanding of the fundamental science of how resist actually works at the chemical level, says Deirdre Olynick, staff scientist at the Molecular Foundry. “Resist is a very complex mixture of materials and it took so long to develop the technology that making huge leaps away from what’s already known has been seen as too risky,” she says. But now the lack of fundamental understanding could potentially put Moore’s Law in jeopardy, she adds.

To understand why resist is so important, consider a simplified explanation of how microprocessors are made. A silicon wafer, about a foot in diameter, is cleaned and coated with a layer of photoresist. Next, ultraviolet light is used to project an image of the desired circuit pattern, including components such as wires and transistors, onto the wafer, chemically altering the resist.

Depending on the type of resist, light either makes it more or less soluble, so when the wafer is immersed in a solvent, the exposed or unexposed areas wash away. The resist protects the material that makes up transistors and wires from being etched away and can allow the material to be selectively deposited. This process of exposure, rinse and etch or deposition is repeated many times until all the components of a chip have been created.
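The develop step described above can be sketched as a small simulation. In a positive resist the exposed areas dissolve in the solvent; in a negative resist exposure hardens the material, so the unexposed areas dissolve instead. The one-dimensional mask below is a hypothetical pattern, not anything from the article:

```python
def develop(mask, resist_type="positive"):
    """Toy model of the solvent rinse on a 1-D strip of resist.

    mask[i] is True where UV light reached the resist.
    Returns a list telling which positions keep their resist:
    positive resist dissolves where exposed; negative resist
    crosslinks where exposed and dissolves where unexposed."""
    if resist_type == "positive":
        return [not exposed for exposed in mask]
    elif resist_type == "negative":
        return list(mask)
    raise ValueError("resist_type must be 'positive' or 'negative'")

mask = [True, False, False, True]   # hypothetical circuit pattern
print(develop(mask, "positive"))    # [False, True, True, False]
print(develop(mask, "negative"))    # [True, False, False, True]
```

The same mask thus prints complementary patterns depending on the resist chemistry, which is why the article says light makes the resist either more or less soluble.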

The problem with today’s resist, however, is that it was originally developed for light sources that emit so-called deep ultraviolet light with wavelengths of 248 and 193 nanometers. But to achieve finer features on chips, the industry intends to switch to a new light source with a shorter wavelength of just 13.5 nanometers. Called extreme ultraviolet (EUV), this light source has already found its way into manufacturing pilot lines. Unfortunately, today’s photoresist isn’t yet ready for high-volume manufacturing.
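The payoff of the shorter wavelength can be estimated with the standard Rayleigh resolution criterion for optical lithography (the criterion, the k1 factor, and the numerical apertures below are textbook illustrative values, not stated in the article): the smallest printable half-pitch is roughly k1 times the wavelength divided by the numerical aperture of the optics.

```python
def min_feature(wavelength_nm, numerical_aperture, k1=0.4):
    """Rayleigh criterion: rough smallest printable half-pitch, in nm.

    k1 is a process-dependent factor; 0.4 is an illustrative value."""
    return k1 * wavelength_nm / numerical_aperture

# Illustrative numerical apertures: ~1.35 for immersion deep-UV tools,
# ~0.33 for early EUV tools.
duv = min_feature(193, 1.35)
euv = min_feature(13.5, 0.33)
print(round(duv, 1), round(euv, 1))  # EUV resolves far smaller features
```

Even with a lower numerical aperture, the fourteen-fold drop in wavelength is what lets EUV print much smaller features, provided the resist can keep up.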

“The semiconductor industry wants to go to smaller and smaller features,” explains Ashby. While extreme ultraviolet light is a promising technology, he adds, “you also need the resist materials that can pattern to the resolution that extreme ultraviolet can promise.”

So teams led by Ashby and Olynick, which include Berkeley Lab postdoctoral researcher Prashant Kulshreshtha, investigated two types of resist. One, called a crosslinking resist, is composed of molecules that form bonds when exposed to ultraviolet light. This kind of resist has good mechanical stability and doesn’t distort during development; that is, tall, thin lines made with it don’t collapse. But if this stability is achieved with excessive crosslinking, it requires long, expensive exposures. The second kind of resist is highly sensitive but lacks that mechanical stability.

When the researchers combined these two types of resist in various concentrations, they found they were able to retain the best properties of both. The materials were tested using the unique EUV patterning capabilities at the CXRO. Using the Nanofabrication and Imaging and Manipulation facilities at the Molecular Foundry to analyze the patterns, the researchers saw improvements in the smoothness of lines created by the photoresist, even as they shrank the width. Through chemical analysis, they were also able to see how various concentrations of additives affected the crosslinking mechanism and the resulting stability and sensitivity.

The researchers say future work includes further optimizing the resist’s chemical formula for the extremely small components required for tomorrow’s microprocessors. The semiconductor industry is currently locking down its manufacturing processes for chips at the so-called 10-nanometer node. If all goes well, these resist materials could play an important role in the process and help Moore’s Law persist.

Henry Sapiecha
