Friday, November 14, 2014

23 INCREDIBLE NEW TECHNOLOGIES YOU’LL SEE BETWEEN 2015 AND 2050

Futurists can dish out some exciting and downright scary visions for the future of machines and science that either enhance or replace activities and products near and dear to us.

Being beamed from one location to another by teleportation was supposed to be right around the corner/in our lifetime/just decades away, but it hasn't become possible yet. Inventions like the VCR that were once high tech -- and now aren't -- proved challenging for some: The VCR became obsolete before many of us learned how to program one. And who knew that working with atoms and molecules would become the future of technology? The futurists, of course.

Forecasting the future of technology is for dreamers who hope to innovate better tools -- and for the mainstream people who hope to benefit from the new and improved. Many inventions are born in the lab and never make it into the consumer market, while others evolve beyond the pace of putting good regulations on their use.
Next, we'll take a look at some sound-loving atoms, tiny tools for molecules, huge bunches of data and some disgruntled bands of people who may want to set all of this innovation back with the stroke of a keyboard.

The full list, with a description of each breakthrough, follows below.

1. Amazon launches "Echo" speaker with interactive AI

Online retail giant Amazon has unveiled a new hi-tech speaker system with a wide range of interactive features. Called Echo, the cylindrical device is controlled by your voice, activated by a special "wake word" and uses far-field listening to hear from anywhere in the room. It can provide real-time information, music, news, weather, a timer/alarm, and many more services – even telling jokes. Crisp vocals with dynamic bass are fine-tuned to deliver an immersive sound from 360° omni-directional speakers.

With an always-on connection, it uses the cloud to continually learn and increase functionality over time – adapting to speech patterns, vocabulary and users' personal preferences. For now, Echo is only available to those with an invitation, but you can request an invite on its product page. It is currently priced at $199, but Prime members can obtain it for $99 for a limited time. Although its technology appears impressive, to some people it might seem rather Orwellian. Let us know your opinion in the comments below.

2. DARPA circuit achieves speed of 1 terahertz (THz)

The fastest ever integrated circuit has been announced by DARPA – achieving one terahertz (10¹² Hz), or a trillion cycles per second.


Guinness World Records has officially recognised DARPA's Terahertz Electronics program for creating the fastest solid-state amplifier integrated circuit ever measured. The ten-stage common-source amplifier operates at a speed of one terahertz (1,000,000,000,000 Hz), or one trillion cycles per second — 150 billion cycles per second faster than the existing world record of 850 gigahertz set in 2012.

“This breakthrough could lead to revolutionary technologies such as high-resolution security imaging systems, improved collision-avoidance radar, communications networks with many times the capacity of current systems and spectrometers that could detect potentially dangerous chemicals and explosives with much greater sensitivity,” said Dev Palmer, DARPA program manager.

Developed by Northrop Grumman Corporation, the Terahertz Monolithic Integrated Circuit (TMIC) exhibits power gains several orders of magnitude beyond the current state of the art, using a super-scaled 25 nanometer gate-length. Gain, which is measured logarithmically in decibels, similar to how earthquake intensity is measured on the Richter scale, describes the ability of an amplifier to increase the power of a signal from the input to the output. The Northrop Grumman TMIC showed a measured gain of nine decibels at 1.0 terahertz and 10 decibels at 1.03 terahertz. By contrast, current smartphone technology operates at one to two gigahertz and wireless networks at 5.7 gigahertz.
“Gains of six decibels or more start to move this research from the laboratory bench to practical applications — nine decibels of gain is unheard of at terahertz frequencies,” said Palmer. “This opens up new possibilities for building terahertz radio circuits.”
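To put those decibel figures in linear terms, here is a minimal sketch; the conversion is just the standard definition of power gain in decibels, and the specific values are the ones quoted above.

```python
# Gain in decibels is 10 * log10(P_out / P_in), so the linear power ratio
# is 10 ** (gain_db / 10).

def db_to_power_ratio(gain_db: float) -> float:
    return 10 ** (gain_db / 10)

for gain_db in (6, 9, 10):
    print(f"{gain_db} dB gain ≈ {db_to_power_ratio(gain_db):.1f}x output power")

# 6 dB  -> ~4.0x  (the threshold Palmer cites for practical applications)
# 9 dB  -> ~7.9x  (measured at 1.0 THz)
# 10 dB -> ~10.0x (measured at 1.03 THz)
```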

For years, researchers have been looking to exploit the tremendously high-frequency band beginning above 300 gigahertz where the wavelengths are less than one millimetre. The terahertz level has proven to be somewhat elusive though, due to a lack of effective means to generate, detect, process and radiate the necessary high-frequency signals.

Current electronics using solid-state technologies have largely been unable to access the sub-millimetre band of the electromagnetic spectrum due to insufficient transistor performance. To address the “terahertz gap,” engineers have traditionally used frequency conversion — converting alternating current at one frequency to alternating current at another frequency — to multiply circuit operating frequencies up from millimetre-wave frequencies. This approach, however, restricts the output power of electrical devices and adversely affects signal-to-noise ratio. Frequency conversion also increases device size, weight and power requirements.

DARPA has made a series of strategic investments in terahertz electronics through its HiFIVE, SWIFT and TFAST programs. Each program has built on the successes of the previous one, providing the foundational research necessary for frequencies to reach the terahertz threshold.

3. 3D printer is 10 times faster than current models

Hewlett-Packard (HP) has unveiled a 3D printer that it claims will be 10 times faster than current models.


HP has introduced its vision for the future of computing and 3D printing by unveiling its new "Blended Reality" ecosystem. Designed to break down the barriers between the digital and physical worlds, this ecosystem is underpinned by two key advancements:
  • HP Multi Jet Fusion: A revolutionary technology engineered to resolve critical gaps in the combination of speed, quality and cost, and deliver on the potential of 3D printing.
  • Sprout by HP: A first-of-its-kind Immersive Computing platform that will redefine the user experience and that creates a foundation for future immersive technologies.
"We are on the cusp of a transformative era in computing and printing," said Dion Weisler, executive vice president, Printing & Personal Systems (PPS). "Our ability to deliver Blended Reality technologies will reduce the barriers between the digital and physical worlds, enabling us to express ourselves at the speed of thought – without filters, without limitations. This ecosystem opens up new market categories that can define the future, empowering people to create, interact and inspire like never before."
"As we examined the existing 3D print market, we saw a great deal of potential but also saw major gaps in the combination of speed, quality and cost," said Stephen Nigro, vice president of Inkjet and Graphic Solutions at HP. "HP Multi Jet Fusion is designed to transform manufacturing across industries by delivering on the full potential of 3D printing with better quality, increased productivity, and break-through economics."
Multi Jet Fusion is built on HP Thermal Inkjet technology and features a unique synchronous architecture that significantly improves the commercial viability of 3D printing and has the potential to change the way we think about manufacturing.
  • 10 times faster: Images entire surface areas versus one point at a time to achieve breakthrough functional build speeds, 10 times faster than the fastest technology in the market today.
  • New levels of quality, strength and durability: Multi-agent printing process utilising HP Thermal Inkjet arrays that simultaneously apply multiple liquid agents to produce best-in-class quality that combines greater accuracy, resiliency and uniform part strength in all three axis directions.
  • Accuracy and detail: Capable of delivering fully functional parts with more accuracy, finer details and smooth surfaces, and able to manipulate part and material properties, including form, texture, friction, strength, elasticity, electrical, thermal properties and more.
  • Achieves breakthrough economics: Unifies and integrates various steps of the print process to reduce running time, cost, energy consumption and waste to significantly improve 3D printing economics.

Sprout – the first product available in HP's Blended Reality ecosystem – combines the power of an advanced desktop computer with an immersive, natural user interface to create a new computing experience. It puts a scanner, depth sensor, hi-resolution camera and projector into a single device, allowing users to take physical items and seamlessly merge them into a digital workspace. The system also delivers an unmatched collaboration platform, allowing users in multiple locations to collaborate on and manipulate a single piece of digital content in real-time.

"We live in a 3D world, but today we create in a 2D world on existing devices," said Ron Coughlin, senior vice president, Consumer PC & Solutions, HP. "Sprout by HP is a big step forward in reimagining the boundaries of how we create and engage with technology to allow users to move seamlessly from thought to expression."

Together, HP says these advancements have the potential to revolutionise production and offer small businesses a new way to produce goods and parts for customers. HP aims to invite open collaboration and partnerships in 2015 to further develop its 3D print system, with general consumer availability in the second half of 2016.

4. Breakthrough in creating DNA-based electrical circuits

An international team has announced "the most significant breakthrough in a decade" toward developing DNA-based electrical circuits.


The central technological revolution of the 20th century was the development of computers, leading to the communication and Internet era. The main measure of this evolution has been miniaturisation: making machines smaller. A computer with the memory of the average laptop today was the size of a tennis court in the 1970s. Yet while scientists made great strides in reducing the size of individual components through microelectronics, they have been less successful at reducing the distance between transistors, the main element of our computers. These spaces between transistors have been much more challenging and extremely expensive to miniaturise – an obstacle that limits the future development of computers.

Molecular electronics, which uses molecules as building blocks for the fabrication of electronic components, was seen as the ultimate solution to the miniaturisation challenge. To date, however, no one has actually been able to make complex electrical circuits using molecules. The only known molecules that can be pre-designed to self-assemble into complex miniature circuits, which could in turn be used in computers, are DNA molecules. Nevertheless, nobody has so far been able to demonstrate reliably and quantitatively the flow of electrical current through long DNA molecules.

Now, an international group led by Prof. Danny Porath, at the Hebrew University of Jerusalem, reports reproducible and quantitative measurements of electricity flow through long molecules made of four DNA strands. The research, which could re-ignite interest in the use of DNA-based wires and devices in the development of programmable circuits, appears in the journal Nature Nanotechnology under the title "Long-range charge transport in single G-quadruplex DNA molecules."

Prof. Porath is affiliated with the Hebrew University's Institute of Chemistry and its Centre for Nanoscience and Nanotechnology. The molecules were produced by the group of Alexander Kotlyar from Tel Aviv University, who has been collaborating with Porath for 15 years. The measurements were performed mainly by Gideon Livshits, a PhD student in the Porath group. The research was carried out in collaboration with groups from Denmark, Spain, the US, Italy and Cyprus.

According to Prof. Porath, "This research paves the way for implementing DNA-based programmable circuits for molecular electronics, which could lead to a new generation of computer circuits that can be more sophisticated, cheaper and simpler to make."

5. Wi-Fi up to five times faster coming in 2015

Samsung Electronics has developed a new way of transmitting Wi-Fi data five times faster than was previously possible. The new technology is expected to be available in consumer devices as early as 2015.


If you've been to a cafe or other public place recently and been frustrated at the slow speed of Wi-Fi, a new breakthrough by Samsung Electronics may soon change that. Researchers at the company have this week developed 60GHz Wi-Fi technology, allowing transfer rates of 4.6Gbps, or 575MB per second. That is 5.3 times faster than the previous maximum speed for consumer devices (866Mbps, or 108MB per second).

Today's generation of Wi-Fi uses the 2.4GHz and 5GHz areas of the radio spectrum. The 60GHz band is currently unlicensed and offers major potential, but previous attempts to exploit it have failed to send data over significant distances, due to path loss and weak penetration properties. Samsung has overcome these issues through a combination of millimetre-wave circuit design, a high performance modem and wide-coverage beam-forming antenna. This eliminates co-channel interference, regardless of the number of devices using the same network.

Commercialisation is expected in 2015, with Samsung planning integration into a wide variety of products – including audio visual, medical devices and telecommunications equipment. It will also help to spur the Internet of Things.
“Samsung prides itself on being at the forefront of technology innovation, and is delighted to have overcome the barriers to the commercialisation of 60GHz millimetre-wave band Wi-Fi technology,” said Paul Templeton, General Manager of Samsung Networks UK. “This breakthrough has opened the door to exciting possibilities for Samsung’s next-generation devices, and has also changed the face of the future development of Wi-Fi technology, promising innovations that were not previously within reach.”
To give an idea of the speed: a 1GB movie will take less than three seconds to transfer between devices, while uncompressed high-definition videos could easily be streamed from mobile devices to TVs in real-time without any delay.
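As a rough sanity check on those numbers, the sketch below does the unit conversion using only the figures quoted in this section.

```python
# Convert the quoted 60GHz Wi-Fi link rate into MB/s and estimate the
# transfer time for a 1GB file. All figures are taken from this section.

link_rate_gbps = 4.6
mb_per_second = link_rate_gbps * 1000 / 8     # ≈ 575 MB/s
speedup = link_rate_gbps * 1000 / 866         # vs 866 Mbps ≈ 5.3x
transfer_seconds = 1 * 8 / link_rate_gbps     # 1 GB = 8 Gb -> ≈ 1.7 s

print(f"{mb_per_second:.0f} MB/s, {speedup:.1f}x faster than 866 Mbps")
print(f"A 1 GB file transfers in about {transfer_seconds:.1f} seconds")
```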

6. 512GB SD card announced by SanDisk

SanDisk has revealed a 512GB SD card, the highest storage capacity ever seen in this form factor.

SanDisk yesterday launched the 512GB Extreme PRO SDXC UHS-I, the world’s highest capacity SD card and the first to reach 512GB, or half a terabyte. This new offering is designed to meet the demands of industry professionals who require the most advanced gear available for shooting 4K Ultra HD (3840x2160p) video, Full HD video (1920x1080) and high-speed burst mode photography.

“As an industry leader, SanDisk continues to push the boundaries of technology to provide customers with the innovative, reliable, high-performance solutions they have come to expect from us,” said Dinesh Bahal, vice president of product marketing. “4K Ultra HD is an example of a technology that is pushing us to develop new storage solutions capable of handling massive file sizes. The 512GB SanDisk Extreme PRO SDXC UHS-I card is a tremendous advancement that enables professionals to reliably store more content on a single card than ever before.”

Since the first 1GB SD card in 2004, storage capacities have grown exponentially and this new 512GB card represents a 500-fold increase in a decade – yet maintains the same size form factor. It delivers write speeds up to 90MB/s and transfer speeds up to 95MB/s. The card is also temperature-proof (withstanding between -40°C and 85°C), waterproof, shockproof and X-ray proof. The product will initially go on sale for $800 (£490), but this cost is likely to decline rapidly in the months and years ahead.
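A quick calculation shows what that growth implies as a doubling time; it is derived only from the 1GB-in-2004 and 512GB-in-2014 figures above.

```python
import math

# Capacity growth from the first 1GB SD card (2004) to 512GB (2014).
growth = 512 / 1                           # ~500-fold over ten years
doublings = math.log2(growth)              # 9 doublings
months_per_doubling = 10 * 12 / doublings  # ≈ 13 months

print(f"{growth:.0f}x growth = {doublings:.0f} doublings, "
      f"one roughly every {months_per_doubling:.0f} months")
```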
The same exponential trend has been witnessed in the smaller-sized microSD format. In February this year, SanDisk revealed the first microSD to reach 128GB of storage capacity.

7. Long-distance virtual telepathy is demonstrated

Direct brain-to-brain communication has been demonstrated in humans located 5,000 miles apart via the Internet.


In a first-of-its-kind study, an international team of neuroscientists and robotics engineers have demonstrated the viability of direct brain-to-brain communication in humans. Recently published in PLOS ONE, the highly novel findings describe the successful transmission of information via the Internet between the intact scalps of two human subjects – located 5,000 miles apart.

"We wanted to find out if one could communicate directly between two people by reading out the brain activity from one person and injecting brain activity into the second person, and do so across great physical distances by leveraging existing communication pathways," explains co-author Alvaro Pascual-Leone, PhD, Director of the Berenson-Allen Center for Noninvasive Brain Stimulation at Beth Israel Deaconess Medical Center (BIDMC) and Professor of Neurology at Harvard Medical School. "One such pathway is, of course, the Internet, so our question became, 'Could we develop an experiment that would bypass the talking or typing part of Internet and establish direct brain-to-brain communication between subjects located far away from each other in India and France?'"
It turned out the answer was "yes."

In the neuroscientific equivalent of instant messaging, Pascual-Leone and his colleagues successfully transmitted the words "hola" and "ciao" in a computer-mediated brain-to-brain transmission, from a location in India to a location in France, using internet-linked electroencephalogram (EEG) and robot-assisted and image-guided transcranial magnetic stimulation (TMS) technologies.


Previous studies on EEG-based brain-computer interaction (BCI) have typically made use of communication between a human brain and computer. In these studies, electrodes attached to a person's scalp record electrical currents in the brain as a person realises an action-thought, such as consciously thinking about moving the arm or leg. The computer then interprets that signal and translates it to a control output, such as a robot or wheelchair.

But, in this new study, the research team added a second human brain on the other end of the system. Four healthy participants, aged 28 to 50, participated in the study. One of the four subjects was assigned to the brain-computer interface (BCI) branch and was the sender of the words; the other three were assigned to the computer-brain interface (CBI) branch of the experiments and received the messages and had to understand them.

Using EEG, the research team first translated the greetings "hola" and "ciao" into binary code, then emailed the results from India to France. There a computer-brain interface transmitted the message to the receiver's brain through non-invasive brain stimulation. The subjects experienced this as phosphenes, flashes of light in their peripheral vision. The light appeared in numerical sequences that enabled the receiver to decode the information in the message, and while the subjects did not report feeling anything, they did correctly receive the greetings.
A second similar experiment was conducted between people in Spain and France, the end result being a total error rate of just 15 percent, 11 percent on the decoding end and five percent on the initial coding side.
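The quoted error figures are roughly consistent with treating the coding and decoding stages as independent error sources; the back-of-the-envelope check below is an illustrative assumption, not the paper's own analysis.

```python
# Combine independent error probabilities for the coding and decoding stages.
# Treating them as independent is an illustrative assumption, not the
# study's own error model.

p_code = 0.05      # error on the initial coding side
p_decode = 0.11    # error on the decoding end

combined = 1 - (1 - p_code) * (1 - p_decode)
print(f"combined error rate ≈ {combined:.1%}")   # roughly 15%, matching the total reported
```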

"By using advanced precision neurotechnologies including wireless EEG and robotised TMS, we were able to directly and noninvasively transmit a thought from one person to another, without them having to speak or write," says Pascual-Leone. "This in itself is a remarkable step in human communication, but being able to do so across a distance of thousands of miles is a critically important proof-of-principle for the development of brain-to-brain communications. We believe these experiments represent an important first step in exploring the feasibility of complementing or bypassing traditional language-based or motor-based communication."

8. In our digital world, are young people losing the ability to read emotions?

Children's social skills may be declining as they have less time for face-to-face interaction due to their increased use of digital media, according to a psychological study by the University of California, Los Angeles (UCLA).


UCLA scientists found that sixth-graders who went five days without even glancing at a smartphone, television or other digital screen did substantially better at reading human emotions than sixth-graders from the same school who continued to spend hours each day looking at their electronic devices.
“Many people are looking at the benefits of digital media in education, and not many are looking at the costs,” said Patricia Greenfield, a distinguished professor of psychology at UCLA College and senior author of the study. “Decreased sensitivity to emotional cues — losing the ability to understand the emotions of other people — is one of the costs. The displacement of in-person social interaction by screen interaction seems to be reducing social skills.”

Researchers studied two sets of sixth-graders from a Southern California public school: 51 who lived together for five days at the Pali Institute, a nature and science camp about 70 miles east of Los Angeles, and 54 others from the same school. The camp doesn’t allow students to use electronic devices — a policy that many students found to be challenging for the first couple of days. Most adapted quickly, however, according to camp counsellors.

At the beginning and end of the study, both groups were evaluated on their ability to recognise people’s emotions in photos and videos. The students were shown 48 pictures of faces that were happy, sad, angry or scared, and asked to identify their feelings. They also watched videos of actors interacting with one another and were instructed to describe the characters’ emotions. In one scene, students take a test and submit it to a teacher; one of the students is confident and excited, the other is anxious. In another scene, one student is saddened after being excluded from a conversation.
The children who had been at the nature camp improved significantly over the five days in their ability to read facial emotions and other non-verbal cues to emotion, compared with the students who continued to use their media devices.

Researchers tracked how many errors the students made when attempting to identify the emotions in the photos and videos. When analysing photos, for example, those at the camp made an average of 9.41 errors at the end of the study, down from 14.02 at the beginning. The students who didn’t attend the camp recorded a significantly smaller change. For the videos, the students who went to camp improved significantly, while the scores of the students who did not attend camp showed no change. The findings applied equally to both boys and girls.
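Expressed as a percentage, derived directly from the error counts reported above, the camp group's improvement works out as follows.

```python
# Relative improvement in emotion-recognition errors for the camp group,
# using the average error counts reported above.

errors_before = 14.02
errors_after = 9.41

improvement = (errors_before - errors_after) / errors_before
print(f"About {improvement:.0%} fewer errors after five days away from screens")
```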

“You can’t learn non-verbal emotional cues from a screen in the way you can learn it from face-to-face communication,” said Yalda Uhls, lead author and senior researcher with the UCLA’s Children’s Digital Media Center, Los Angeles. “If you’re not practicing face-to-face communication, you could be losing important social skills.”

Students participating in the study reported that they text, watch television and play video games for an average of four-and-a-half hours on a typical school day. Some surveys have found that the figure is even higher nationally. Greenfield considers the results significant, given that they occurred after only five days. The implications of the research are that people need more face-to-face interaction — and that even when people use digital media for social interaction, they’re spending less time developing social skills and learning to read non-verbal cues.

“We’ve shown a model of what more face-to-face interaction can do,” Greenfield said. “Social interaction is needed to develop skills in understanding the emotions of other people.”
Emoticons are a poor substitute for face-to-face communication, Uhls concluded: “We are social creatures. We need device-free time.”
The research will appear in the October print edition of Computers in Human Behavior and is already published online.

9. The Internet of Things: A Trillion Dollar Market

The Internet of Things is a new paradigm that will revolutionise the world of computers – offering widespread automation and connectivity of devices, systems and services, including the emergence of Smart Grids. Over the next decade, it is forecast to mushroom into a trillion dollar market.
This slideshare presentation by Vala Afshar, Chief Marketing Officer at Extreme Networks, shows many applications that are already becoming available.

10. Computer program recognises emotions with 87% accuracy

Researchers in Bangladesh have designed a computer program able to accurately recognise users’ emotional states as much as 87% of the time, depending on the emotion.


Writing in the journal Behaviour & Information Technology, Nazmul Haque Nahin and his colleagues describe how their study combined – for the first time – two established ways of detecting user emotions: keystroke dynamics and text-pattern analysis.

To provide data for the study, volunteers were asked to note their emotional state after typing passages of fixed text, as well as at regular intervals during their regular (‘free text’) computer use. This provided researchers with data about keystroke attributes associated with seven emotional states (joy, fear, anger, sadness, disgust, shame and guilt). To help them analyse sample texts, the researchers made use of a standard database of words and sentences associated with the same seven emotional states.
After running a variety of tests, the researchers found that their new ‘combined’ results were better than their separate results; what’s more, the ‘combined’ approach improved performance for five of the seven categories of emotion. Joy (87%) and anger (81%) had the highest rates of accuracy.
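The paper's exact combination rule is not spelled out here, so the sketch below is purely illustrative: a simple weighted average of per-emotion scores from a keystroke-dynamics detector and a text-pattern detector, with all names, weights and score values assumed for demonstration.

```python
# Illustrative fusion of two emotion detectors. The 50/50 weighting and the
# score values are assumptions for demonstration, not the study's method.

EMOTIONS = ["joy", "fear", "anger", "sadness", "disgust", "shame", "guilt"]

def fuse_scores(keystroke_scores: dict, text_scores: dict, w: float = 0.5) -> str:
    """Return the emotion with the highest combined confidence."""
    combined = {
        e: w * keystroke_scores.get(e, 0.0) + (1 - w) * text_scores.get(e, 0.0)
        for e in EMOTIONS
    }
    return max(combined, key=combined.get)

# Hypothetical per-emotion confidences from each detector:
keystroke = {"joy": 0.6, "anger": 0.3}
text_pattern = {"joy": 0.7, "sadness": 0.2}
print(fuse_scores(keystroke, text_pattern))   # -> "joy"
```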

This research is an important contribution to ‘affective computing’, a growing field dedicated to ‘detecting user emotion in a particular moment’. As the authors note – for all the advances in computing power, performance and size in recent years, a lot more can still be done in terms of their interactions with end users. “Emotionally aware systems can be a step ahead in this regard,” they write. “Computer systems that can detect user emotion can do a lot better than the present systems in gaming, online teaching, text processing, video and image processing, user authentication and so many other areas where user emotional state is crucial.”

While much work remains to be done, this research is an important step towards making ‘emotionally intelligent’ systems a reality – systems that recognise users’ emotional states and adapt their music, graphics, content or approach to learning accordingly.

11. Brain-like supercomputer the size of a postage stamp

Scientists at IBM Research have created a neuromorphic (brain-like) computer chip, featuring 1 million programmable neurons and 256 million programmable synapses.


IBM this week unveiled "TrueNorth" – the most advanced and powerful computer chip of its kind ever built. This neurosynaptic processor is the first to achieve one million individually programmable neurons, sixteen times more than the current largest neuromorphic chip. Designed to mimic the structure of the human brain, it represents a major departure from older computer architectures of the last 70 years. By merging the pattern recognition abilities of neurosynaptic chips with traditional system layouts, researchers aim to create "holistic computing intelligence".

Measured by device count, TrueNorth is the largest IBM chip ever fabricated, with 5.4 billion transistors at 28nm. Yet it consumes under 70 milliwatts while running at biological real time – orders of magnitude less power than a typical modern processor. This amazing feat is made possible because neurosynaptic chips are event driven, as opposed to the "always on" operation of traditional chips. In other words, they function only when needed, resulting in vastly less energy use and a much cooler temperature. It is hoped this combination of ultra-efficient power consumption and entirely new system architecture will allow computers to far more accurately emulate the brain.

TrueNorth is composed of 4,096 cores, with each of these modules integrating memory, computation and communication. The cores are distributed in a parallel, flexible and fault-tolerant grid – able to continue operating when individual cores fail, similar to a biological system. And – like a brain cortex – adjacent TrueNorth chips can be seamlessly tiled and scaled up. To demonstrate this scalability, IBM also revealed a 16-chip motherboard with 16 million programmable neurons: roughly equivalent to a frog brain.

Each of these "neurons" features 256 inputs, whereas the human brain averages 10,000. That may sound like a huge difference – but in the world of computers and technology, progress tends to be exponential. In other words, we could see machines as computationally powerful as a human brain within 10–15 years. The implications are staggering. When sufficiently scaled up, this new generation of "cognitive computers" could transform society, leading to a myriad of applications able to intelligently analyse visual, auditory, and multi-sensory data.
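The headline figures fit together as simple multiplication. The sketch below uses a per-core count of 256 neurons, which is implied by (though not stated in) the totals quoted above.

```python
# How TrueNorth's published figures relate: 4,096 cores, 256 neurons per core,
# 256 inputs (synapses) per neuron. The per-core neuron count is implied by
# the totals quoted above.

cores = 4096
neurons_per_core = 256
inputs_per_neuron = 256

neurons = cores * neurons_per_core       # 1,048,576 (~1 million)
synapses = neurons * inputs_per_neuron   # 268,435,456 = 256 * 2**20,
                                         # the "256 million" quoted
board = 16 * neurons                     # 16-chip board: ~16.8 million neurons

print(f"{neurons:,} neurons, {synapses:,} synapses, {board:,} neurons per board")
```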

12. New technology can extract audio from visual data

Researchers at MIT, Microsoft, and Adobe have developed an algorithm that can reconstruct an audio signal by analysing microscopic vibrations of objects depicted in video. In one set of experiments, they were able to recover intelligible speech from the vibrations of a crisp packet, photographed from 15 feet away through sound-proof glass.

see : https://www.youtube.com/watch?feature=player_embedded&v=FKXOucXB4a8

In other experiments, the researchers extracted useful audio signals from videos of aluminium foil, the surface of a glass of water, and even the leaves of a potted plant. Their findings are presented at this year’s SIGGRAPH, the world's largest conference on computer graphics and interactive techniques.
“When sound hits an object, it causes the object to vibrate,” says Abe Davis, a graduate student in electrical engineering and computer science at MIT and first author on the new paper. “The motion of this vibration creates a very subtle visual signal that’s usually invisible to the naked eye. People didn’t realise that this information was there.”

Reconstructing audio from video requires that the frequency of the video samples — the number of frames of video captured per second — be higher than the frequency of the audio signal. In some of their experiments, the researchers used a high-speed camera able to capture 2,000 to 6,000 frames per second. That’s much faster than the 60 frames per second possible with some smartphones, but well below the frame rates of the best commercial high-speed cameras, which can top 100,000 frames per second.

In other experiments, however, they used an ordinary digital camera. Because of a quirk in the design of most cameras’ sensors, the researchers were able to infer information about high-frequency vibrations even from video recorded at a standard 60 frames per second. While this audio reconstruction wasn’t as faithful as that with the high-speed camera, it may still be good enough to identify the gender of a speaker in a room, the number of speakers and even — given accurate enough information about the acoustic properties of speakers’ voices — their identities.
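The frame-rate requirement follows from the usual Nyquist sampling rule, illustrated below; this is a standard signal-processing rule of thumb, not a figure taken from the paper.

```python
# A camera sampling at fs frames per second can faithfully capture vibration
# frequencies only up to about fs / 2 (the Nyquist limit), which is why the
# 60 fps experiments had to lean on the sensor quirk described above.

def max_recoverable_hz(frames_per_second: float) -> float:
    return frames_per_second / 2

for fps in (60, 2000, 6000):
    print(f"{fps:>5} fps -> vibrations up to ~{max_recoverable_hz(fps):,.0f} Hz")
```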

The researchers’ technique has obvious applications in law enforcement and forensics, but Davis is more enthusiastic about the possibility of what he describes as a new kind of imaging: “We’re recovering sounds from objects. That gives us a lot of information about the sound that’s going on around the object, but it also gives us a lot of information about the object itself, because different objects are going to respond to sound in different ways.”


In their experiments, the researchers have been measuring the material, mechanical, and structural properties of objects based on motions less than a tenth of a micrometre in size. That corresponds to 1/5000th of a pixel in close-up images — but it's possible to infer motions smaller than a pixel by looking at the way a single pixel’s colour value fluctuates over time.

“This is new and refreshing. It’s the kind of stuff that no other group would do right now,” says Alexei Efros, an associate professor of electrical engineering and computer science at the University of California at Berkeley. “We’re scientists, and sometimes we watch these movies, like James Bond, and we think, ‘This is Hollywood theatrics. It’s not possible to do that. This is ridiculous.’ And suddenly, there you have it. This is totally out of some Hollywood thriller. You know that the killer has admitted his guilt because there’s surveillance footage of his potato chip bag vibrating.”

However, technology of this kind may raise concerns over privacy in the future — particularly with ongoing, exponential advances in screen resolution, computer power and sensing abilities. Imagine a miniaturised version, for instance, able to be incorporated into glasses or even bionic eyes. The use of surveillance drones and high-definition CCTV will also increase greatly in the coming years. Looking at the more distant future, the algorithms will be orders of magnitude more accurate and detailed, possibly combined with X-ray camera vision to peer through walls and other intervening obstacles. Perhaps by then, we will enter a world in which privacy becomes a thing of the past.

13. A new data transfer record: 43 terabits per second

A team in Denmark has broken the world record for single fibre data transmission, achieving a transfer rate of 43 terabits per second over a distance of 41 miles (67 km). They also report a speed of 1 petabit per second (1,000 terabits per second) when combining multiple lasers.


In 2009, a research group at the Technical University of Denmark (DTU) was the first to break the 1 terabit barrier for data transfer. Their record was shattered in 2011, when the Karlsruhe Institute of Technology in Germany achieved 26 terabits per second. Now, DTU have regained the title, demonstrating 43 terabits per second (Tbps) through a single optical fibre. This is fast enough to download a 1GB file in about 0.0002 seconds – or the entire contents of a 1TB hard drive in 0.2 seconds.
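Checking the quoted download times against the raw link rate is straightforward unit conversion, with no assumptions beyond the figures above.

```python
# Transfer times at 43 terabits per second.

LINK_TBPS = 43

def seconds_to_transfer(gigabytes: float) -> float:
    bits = gigabytes * 8e9            # bytes -> bits
    return bits / (LINK_TBPS * 1e12)

print(f"1 GB file:   {seconds_to_transfer(1):.4f} s")     # ≈ 0.0002 s
print(f"1 TB drive:  {seconds_to_transfer(1000):.2f} s")  # ≈ 0.2 s
```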

The Danish team's effort may seem almost excessive, to the point of comedy. However, current trends show that insanely fast transfer speeds like this will be necessary in the relatively near future. Like a digital explosion, the Internet continues to expand and grow exponentially – doubling in size every two years. Improvements in video quality and image resolution mean the amount of data appearing online is mushrooming to enormous proportions, while at the same time, billions more people are gaining access to the web.

All of this requires energy – the Internet already accounts for around two percent of CO2 emissions – so it is essential to identify solutions that deliver significant reductions in power consumption while simultaneously expanding the bandwidth.
DTU's researchers achieved their latest record by using a new type of optical fibre borrowed from the Japanese telecoms giant NTT. This type of fibre contains seven cores (glass threads) instead of the single core used in standard fibres, making it possible to transfer even more data. Despite the fact that it comprises seven cores, the new fibre does not take up any more space than the standard version.
As to when speeds in the tens of terabits range might be affordable to mainstream consumers, we reckon sometime in the 2030s.

14. Project Adam: a new deep-learning system

Developed by Microsoft, Project Adam is a new deep-learning system modelled after the human brain that has greater image classification accuracy and is 50 times faster than other systems in the industry. The goal of Project Adam is to enable software to visually recognise any object. This is being marketed as a competitor to Google's Brain project, currently being worked on by Ray Kurzweil.

see : http://msrvideo.vo.msecnd.net/rmcvideos/220709/220709.mp4

15. AMD plans for 25x efficiency gains by 2020

AMD has announced its goal to deliver a 25 times improvement in the energy efficiency of its Accelerated Processing Units (APUs) by 2020.


AMD's plans, including innovations that will produce the expected efficiency gains, were presented by Chief Technology Officer Mark Papermaster during a keynote at the China International Software and Information Service Fair (CISIS) in Dalian, China. The "25X20" target is a substantial increase compared to the prior six years (2008 to 2014), during which time AMD improved the typical-use energy efficiency of its products by around 10x.

Worldwide, three billion personal computers use more than one percent of all energy consumed annually, and 30 million computer servers use an additional 1.5 percent of all electricity consumed at an annual cost of $14 billion to $18 billion USD. Expanded use of the Internet, mobile devices, and interest in cloud-based video and audio content in general is expected to result in all of those numbers increasing in future years.

"Creating differentiated low-power products is a key element of our business strategy, with an attending relentless focus on energy efficiency," said Papermaster. "Through APU architectural enhancements and intelligent power efficient techniques, our customers can expect to see us dramatically improve the energy efficiency of our processors during the next several years. Setting a goal to improve the energy efficiency of our processors 25 times by 2020 is a measure of our commitment and confidence in our approach."

"The energy efficiency of information technology has improved at a rapid pace since the beginning of the computer age, and innovations in semiconductor technologies continue to open up new possibilities for higher efficiency," said Dr. Jonathan Koomey, research fellow at the Steyer-Taylor Centre for Energy Policy and Finance at Stanford University. "AMD has steadily improved the energy efficiency of its mobile processors, having achieved greater than a 10-fold improvement over the last six years in typical-use energy efficiency. AMD's focus on improving typical power efficiency will likely yield significant consumer benefits substantially improving real-world battery life and performance for mobile devices. AMD's technology plans show every promise of yielding about a 25-fold improvement in typical-use energy efficiency for mobile devices over the next six years, a pace that substantially exceeds historical rates of growth in peak output energy efficiency. This would be achieved through both performance gains and rapid reductions in the typical-use power of processors. In addition to the benefits of increased performance, the efficiency gains help to extend battery life, enable development of smaller and less material intensive devices, and limit the overall environmental impact of increased numbers of computing devices."

Moore's Law states that the number of transistors capable of being built in a given area doubles roughly every two years. Dr. Koomey's research demonstrates that historically, energy efficiency of processors has closely tracked the rate of improvement predicted by Moore's Law. From 2014-2020, however, AMD expects its energy efficiency achievements to outpace the historical trend of Moore's Law by at least 70 percent.
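As a simplified comparison, the sketch below reproduces the basic arithmetic of doubling every two years versus a 25x gain in six years; it is not AMD's own derivation of the 70 percent figure.

```python
# Compare AMD's 25X20 goal with a Moore's-Law-style doubling every two years.
# This is a simplified reading of the figures above, not AMD's own calculation.

years = 6                                       # 2014 -> 2020
moores_law_gain = 2 ** (years / 2)              # 8x over six years
target_gain = 25                                # the 25X20 goal

annual_moore = 2 ** 0.5 - 1                     # ≈ 41% per year
annual_target = target_gain ** (1 / years) - 1  # ≈ 71% per year

print(f"Moore's-Law pace: {moores_law_gain:.0f}x over {years} years "
      f"({annual_moore:.0%} per year)")
print(f"25X20 target: {target_gain}x over {years} years ({annual_target:.0%} per year)")
```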

Heterogeneous System Architecture (HSA), for example, can combine CPU and GPU cores along with special purpose accelerators and video encoders on the same chip, in the form of APUs. This innovation saves energy by eliminating the connections between discrete chips, and reducing the number of compute cycles by treating the CPU and GPU as peers – seamlessly shifting workloads to the optimal processing component.
Intelligent, real-time power management offers another potential efficiency gain. Most computing operation is characterised by "idle time" – the interval between keystrokes, touch inputs or time reviewing displayed content. Executing tasks as quickly as possible to hasten a return to idle, and then minimising the power used at idle is extremely important for managing energy consumption. 

Most consumer-oriented tasks such as web browsing, document editing, and photo editing benefit from this "race to idle" behaviour. Accelerated Processing Units (APUs) can perform real-time analysis on the workload and applications – dynamically adjusting clock speed to achieve optimal throughput rates, then dropping back into low-power idle mode.

Future innovations such as inter-frame power gating, per-part adaptive voltage, voltage islands, further integration of system components, and other techniques still in the development stage should yield additional gains in energy efficiency.

16. Hybrid circuit material could replace silicon

Researchers have overcome a major issue in carbon nanotube technology by developing a flexible, energy-efficient hybrid circuit combining carbon nanotube thin film transistors with other thin film transistors. This hybrid could take the place of silicon as the traditional transistor material used in electronic chips, since carbon nanotubes are more transparent, flexible, and can be processed at a lower cost.


Prof. Chongwu Zhou – along with graduate students from the University of Southern California – developed this energy-efficient circuit by integrating carbon nanotube (CNT) thin film transistors (TFT) with thin film transistors comprised of indium, gallium and zinc oxide (IGZO).

“I came up with this concept in January 2013,” said Dr. Zhou, who works in the Department of Electrical Engineering. “Before then, we were working hard to try to turn carbon nanotubes into n-type transistors and then one day, the idea came to me. Instead of working so hard to force nanotubes to do something that they are not good for, why don’t we just find another material which would be ideal for n-type transistors – in this case, IGZO – so we can achieve complementary circuits?”

Carbon nanotubes are so small that they can only be viewed through a scanning electron microscope. This hybridisation of carbon nanotube thin films and IGZO thin films was achieved by combining their types, p-type and n-type, respectively, to create circuits that can operate complementarily, reducing power loss and increasing efficiency. The inclusion of IGZO thin film transistors was necessary to provide power efficiency to increase battery life. If only carbon nanotubes had been used, then the circuits would not be power-efficient. By combining the two materials, their strengths have been joined and their weaknesses hidden.

Zhou likened the coupling of carbon nanotube TFTs and IGZO TFTs to the Chinese philosophy of yin and yang: “It’s like a perfect marriage. We are very excited about this idea of hybrid integration and we believe there is a lot of potential for it.”

Potential applications for this kind of integrated circuitry are numerous – including Organic Light Emitting Diodes (OLEDs), radio frequency identification (RFID) tags, sensors, wearable electronics, and flash memory devices. Even heads-up displays on vehicle dashboards could soon be a reality.

The new technology also has major medical implications. Currently, memory used in computers and phones is made with silicon substrates, the surface on which memory chips are built. To obtain medical information from a patient such as heart rate or brainwave data, stiff electrode objects are placed on several fixed locations on the patient’s body. With this new hybridised circuit, however, electrodes could be placed all over the patient’s body with just a single large but flexible object.

With this development, Zhou and his team have circumvented the difficulty of creating n-type carbon nanotube TFTs and p-type IGZO TFTs by creating a hybrid integration of p-type carbon nanotube TFTs and n-type IGZO TFTs, demonstrating large-scale integration of circuits. As a proof of concept, they achieved a large-scale ring oscillator consisting of over 1,000 transistors. Up to this point, carbon nanotube-based circuits had contained a maximum of 200 transistors.

“We believe this is a technological breakthrough, as no one has done this before,” said Haitian Chen, research assistant and electrical engineering PhD student at USC. “This gives us further proof that we can make larger integrations so we can make more complicated circuits for computers and circuits.”
The next step for Zhou and his team will be to build more complicated circuits using a CNT and IGZO hybrid that achieves more complicated functions and computations, as well as to build circuits on flexible substrates.

“The possibilities are endless, as digital circuits can be used in any electronics,” Chen said. “One day we’ll be able to print these circuits as easily as newspapers.”
Zhou and Chen believe that carbon nanotube technology, including this new CNT-IGZO hybrid, will be commercialised in the next 5-10 years.
“I believe that this is just the beginning of creating hybrid integrated solutions,” said Zhou. “We will see a lot of interesting work coming up.”

Their latest work is published in Nature Communications.

17. Turing Test passed? Researchers claim breakthrough in artificial intelligence

Researchers are claiming a major breakthrough in artificial intelligence with a machine program that can pass the famous Turing Test.


At the Royal Society in London yesterday, an event called Turing Test 2014 was organised by the University of Reading. This involved a chat program known as Eugene being presented to a panel of judges and trying to convince them it was human. These judges included the actor Robert Llewellyn – who played robot Kryten in sci-fi comedy TV series Red Dwarf – and Lord Sharkey, who led a successful campaign for Alan Turing's posthumous pardon last year. During this competition, which saw five computers taking part, Eugene fooled 33% of human observers into thinking it was a real person as it claimed to be a 13-year-old boy from Odessa in Ukraine.

In 1950, British mathematician and computer scientist Alan Turing published his seminal paper, "Computing Machinery and Intelligence", in which he proposed the now-famous test for artificial intelligence. Turing predicted that by the year 2000, machines with a storage capacity of about 10⁹ bits (roughly 120 MB) would be able to fool 30% of human judges in a five-minute test, and that people would no longer consider the phrase "thinking machine" contradictory.

In the years since 1950, the test has proven both highly influential and widely criticised. A number of breakthroughs have emerged in recent times from groups claiming to have satisfied the criteria for "artificial intelligence". We have seen Cleverbot, for example, and IBM's Watson, as well as gaming bots and the CAPTCHA-solving Vicarious. It is therefore easy to be sceptical about whether Eugene represents something genuinely new and revolutionary.

Professor Kevin Warwick (who also happens to be the world's first cyborg), comments in a press release from the university: "Some will claim that the Test has already been passed. The words 'Turing Test' have been applied to similar competitions around the world. However, this event involved more simultaneous comparison tests than ever before, was independently verified and, crucially, the conversations were unrestricted. A true Turing Test does not set the questions or topics prior to the conversations. We are therefore proud to declare that Alan Turing's Test was passed for the first time on Saturday."

Eugene's creator and part of the development team, Vladimir Veselov, said as follows: "Eugene was 'born' in 2001. Our main idea was that he can claim that he knows anything, but his age also makes it perfectly reasonable that he doesn't know everything. We spent a lot of time developing a character with a believable personality. This year, we improved the 'dialog controller' which makes the conversation far more human-like when compared to programs that just answer questions. Going forward, we plan to make Eugene smarter and continue working on improving what we refer to as 'conversation logic'."


Is the Turing Test a reliable indicator of intelligence? Who gets to decide the figure of 30% and what is the significance of this number? Surely imitation and pre-programmed replies cannot qualify as "understanding"? These questions and many others will be asked in the coming days, just as they have been asked following similar breakthroughs in the past. To gain a proper understanding of intelligence, we will need to reverse engineer the brain – something which is very much achievable in the next decade, based on current trends.

Regardless of whether Eugene is a bona fide AI, computing power will continue to grow exponentially in the coming years, with major implications for society in general. Benefits may include a 50% reduction in healthcare costs, as software programs are used for big data management to understand and predict the outcomes of treatment. Call centre staff, already competing with virtual employees today, could be almost fully automated in the 2030s, with zero waiting times for callers trying to seek help. Self-driving cars and other forms of AI could radically reshape our way of life.

Downsides to AI may include a dramatic rise in unemployment as humans are increasingly replaced by machines. Another big area of concern is security, as Professor Warwick explains: "Having a computer that can trick a human into thinking that someone – or even something – is a person we trust is a wake-up call to cybercrime. The Turing Test is a vital tool for combatting that threat. It is important to understand more fully how online, real-time communication of this type can influence an individual human in such a way that they are fooled into believing something is true... when in fact it is not."
Further into the future, AI will gain increasingly mobile capabilities, able to learn and become aware of the physical world. No longer restricted to the realms of software and cyberspace, it will occupy hardware that includes machines literally indistinguishable from real people. By then, science fiction will have become reality and our civilisation will enter a profound, world-changing epoch that some have called a technological singularity. If Ray Kurzweil's ultimate prediction is to be believed, our galaxy and perhaps the entire universe may become saturated with intelligence, as formerly lifeless rocks are converted into sentient matter.

18. Shatterproof screens to protect smartphones

Polymer scientists at the University of Akron in Ohio have developed a transparent electrode that could change the face of smartphones, literally, by making their displays shatterproof.


In a recently published paper, researchers show how a transparent layer of nanowire-based electrodes on a polymer surface could be extraordinarily tough and flexible, withstanding repeated scotch tape peeling and bending tests. This could revolutionise and replace conventional touchscreens, according to Yu Zhu, UA assistant professor of polymer science. Currently used coatings made of indium tin oxide (ITO) are more brittle, more likely to shatter, and increasingly costly to manufacture.

“These two pronounced factors drive the need to substitute ITO with a cost-effective and flexible conductive transparent film,” Zhu says, adding that the new film provides the same degree of transparency as ITO, yet offers greater conductivity. The novel film retains its shape and functionality after tests in which it has been bent 1,000 times. Due to its flexibility, the transparent electrode can be fabricated in economical, mass-quantity rolls.

“We expect this film to emerge on the market as a true ITO competitor,” Zhu says. “The annoying problem of cracked smartphone screens may be solved once and for all with this flexible touchscreen.”
The findings are published by the American Chemical Society’s journal ACS Nano in a study titled “A Tough and High-Performance Transparent Electrode from a Scalable and Transfer-Free Method.”

19. Intel announces the first 14 nanometre processor

At the Computex conference in Taipei, chipmaker Intel has revealed a fanless mobile PC reference design using the first of its next-generation 14nm "Broadwell" processors.


The 2 in 1 reference design has a 12.5" screen, is just 7.2 mm thick with the keyboard detached, and weighs 670 grams. The Surface Pro 3 – for comparison – is 9.1 mm thick and weighs 800 grams. It includes a media dock that provides additional cooling for a burst of performance. The next-generation chip is purpose-built for 2 in 1s and will hit the market later in 2014. Called the Intel Core M, it will be the most energy-efficient Intel Core processor in the company's history, with power usage cut by up to 45 percent, resulting in 60 percent less heat. The majority of designs based on this new chip are expected to be fanless, with up to 32 hours of battery life, offering both a lightning-fast tablet and razor-thin laptop.

Intel is also delivering innovation and performance for the most demanding PC users. During the conference, the company introduced its 4th generation Core i7 and i5 processor "K" SKU – the first from Intel to deliver four cores at up to 4 GHz base frequency. This desktop processor, built for enthusiasts, enables new levels of overclocking capability. Production shipments begin this month.
Intel also outlined progress towards a vision to deliver 3-D camera and voice recognition technologies to advance more natural, intuitive interaction with computing devices. The latest RealSense software development kit will be made available in the third quarter of 2014, providing opportunity for developers of all skill levels to create user interfaces.

Computer processors continue to get smaller, faster and cheaper thanks to Moore's Law – expanding the scale and potential for technology in everything from cloud computing and the Internet of Things, to mobile phones and wearable technology.

"The lines between technology categories are blurring as the era of integrated computing takes hold where form factor matters less than the experience delivered when all devices are connected to each other and to the cloud," said Renée James, Intel Corporation President. "Whether it's a smartphone, smart shirt, ultra-thin 2-in-1, or a new cloud service delivered to smart buildings outfitted with connected systems, together Intel and the Taiwan ecosystem have the opportunity to accelerate and deliver the value of a smart, seamlessly connected and integrated world of computing."

20. A breakthrough in quantum teleportation

Scientists have transferred data by quantum teleportation over a distance of 10 feet with zero percent error rate.


Teleporting people through space, as done in Star Trek, is impossible with our current knowledge of physics. Teleporting information is another matter, however, thanks to the extraordinary world of quantum mechanics. Researchers at Delft University of Technology in the Netherlands have succeeded in transferring the information contained in a qubit – the quantum equivalent of a classical bit – to a different quantum bit over a distance of three metres (10 feet), without the information having travelled through the intervening space. This was achieved with a zero percent error rate.
The breakthrough is a vital step towards a future quantum network for communication between ultra-fast quantum computers – a "quantum internet". Quantum computers will solve many important problems that even today's best supercomputers are unable to tackle. Furthermore, a quantum internet will enable completely secure information transfer, as eavesdropping will be fundamentally impossible in such a network. To achieve teleportation, researchers in this study made use of an unusual phenomenon known as entanglement.

"Entanglement is arguably the strangest and most intriguing consequence of the laws of quantum mechanics," argues the head of the research project, Prof. Ronald Hanson. "When two particles become entangled, their identities merge: their collective state is precisely determined, but the individual identity of each of the particles has disappeared. The entangled particles behave as one, even when separated by a large distance. The distance in our tests was three metres – but in theory, the particles could be on either side of the universe. Einstein didn't believe in this prediction and he called it 'spooky action at a distance'. Numerous experiments, on the other hand, agree with the existence of entanglement."

Using entanglement as a means of communication has been achieved in previous work by scientists – but the error rates have been so high as to make those methods impractical for real-world applications. In this new effort, Hanson has solved the error rate problem, bringing it down to zero. His team is the first to have succeeded in teleporting information accurately between qubits in different computer chips: "The unique thing about our method is that the teleportation is guaranteed to work 100%," he says. "The information will always reach its destination, so to speak. And, moreover, it also has the potential of being 100% accurate."

Hanson's team produces solid-state qubits from electrons in diamonds held at very low temperatures and manipulated with lasers: "We use diamonds because 'mini prisons' for electrons are formed in this material whenever a nitrogen atom is located in the position of one of the carbon atoms. The fact that we're able to view these miniature prisons individually makes it possible to study and verify an individual electron and even a single atomic nucleus. We're able to set the spin (rotational direction) of these particles in a predetermined state, verify this spin and subsequently read out the data. We do all this in a material that can be used to make chips out of. This is important, as many believe that only chip-based systems can be scaled up to a practical technology," he explains.


Hanson plans to repeat the experiment this summer over a much larger distance of 1,300 m (4,265 ft), using chips located in various buildings on the university campus. This experiment could be the first that meets the criteria of the "loophole-free Bell test", and could provide the ultimate evidence to disprove Einstein's rejection of entanglement. Various groups, including Hanson's, are currently striving to be the first to realise a loophole-free Bell test – considered the Holy Grail within quantum mechanics.
The results of this study are published this week in Science.

21. A breakthrough in real-time translated conversations

At the Code Conference in California, Microsoft demonstrated Skype Translator – a new technology enabling cross-lingual conversations in real time. Resembling the "universal translator" from Star Trek, this feature will be available on Windows 8 by the end of 2014 as a limited beta. Microsoft has worked on machine translation for 15 years, and translating voice over Skype in real time had once been considered "a nearly impossible task." In the world of technology, however, miracles do happen. According to CEO Satya Nadella, the system does more than just automatic speech recognition, machine translation and voice synthesis: it can actually "learn" from different languages, through a brain-like neural net. When you consider that 300 million people now connect to Skype each month, making 2 billion minutes of conversation each day, the potential in terms of improved communication is staggering.
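Conceptually, the system chains those three stages together. The sketch below is purely illustrative – the function names are toy stand-ins, not Microsoft's actual Skype Translator API:

# Illustrative pipeline: speech recognition -> machine translation -> voice synthesis.
def recognise_speech(audio_chunk: bytes) -> str:
    # Stand-in for automatic speech recognition.
    return audio_chunk.decode("utf-8")

def translate_text(text: str, source: str, target: str) -> str:
    # Stand-in for machine translation (a real system would use a
    # neural model trained on large parallel corpora).
    toy_dictionary = {("en", "de"): {"hello": "hallo", "world": "welt"}}
    table = toy_dictionary.get((source, target), {})
    return " ".join(table.get(word, word) for word in text.lower().split())

def synthesise_speech(text: str) -> bytes:
    # Stand-in for text-to-speech synthesis.
    return text.encode("utf-8")

# One turn of a cross-lingual conversation:
incoming = b"Hello world"
print(synthesise_speech(translate_text(recognise_speech(incoming), "en", "de")))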

22. Microchip-like technology allows single-cell analysis

Using components similar to those that control electrons in microchips, researchers have designed a new device that can sort, store, and retrieve individual cells for study.


An international research team has developed a chip-like device that could be scaled up to sort and store hundreds of thousands of individual living cells in a matter of minutes. The chip is similar to random-access memory (RAM), but moves cells rather than electrons. It is hoped the cell-sorting system will revolutionise medical research by allowing the fast, efficient control and separation of individual cells, which could then be studied in vast numbers.

“Most experiments grind up a bunch of cells and analyse genetic activity by averaging the population of an entire tissue rather than looking at the differences between single cells within that population,” says Benjamin Yellen, associate professor at Duke University's Pratt School of Engineering. “That’s like taking the eye colour of everyone in a room and finding that the average colour is grey, when not a single person in the room has grey eyes. You need to be able to study individual cells to understand and appreciate small but significant differences in a similar population.”

Yellen and his collaborator – Cheol Gi Kim, from the Daegu Gyeongbuk Institute of Science and Technology (DGIST) in South Korea – printed thin electromagnetic components like those found on microchips onto a slide. These patterns create magnetic tracks and elements like switches, transistors and diodes to guide magnetic beads and single cells tagged with magnetic nanoparticles through a thin liquid film.

Like a series of small conveyor belts, localised rotating magnetic fields move the beads and cells along specific directions etched on a track, while built-in switches direct traffic to storage sites on the chip. The result is an integrated circuit that controls small magnetic objects much like the way electrons are controlled on computer chips.
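As a loose software analogy (a sketch only, with made-up names – the real device is driven by rotating magnetic fields, not code), the compartments behave like addressable memory locations that can be "written" with a cell and "read" back:

# Illustrative analogy for the RAM-like behaviour described above:
# a grid of compartments, each holding one magnetically tagged cell,
# addressed by row and column.
class CellMemory:
    def __init__(self, rows: int, cols: int):
        self.grid = [[None for _ in range(cols)] for _ in range(rows)]

    def store(self, row: int, col: int, cell_id: str) -> bool:
        # "Write": a switch routes a tagged cell into an empty compartment.
        if self.grid[row][col] is None:
            self.grid[row][col] = cell_id
            return True
        return False

    def retrieve(self, row: int, col: int):
        # "Read": release the cell from its compartment for further study.
        cell_id, self.grid[row][col] = self.grid[row][col], None
        return cell_id

memory = CellMemory(3, 3)        # a small grid, like the one demonstrated
memory.store(0, 2, "cell_A")
print(memory.retrieve(0, 2))     # -> "cell_A"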

See: https://www.youtube.com/watch?feature=player_embedded&v=hDttH_Fycu8

In their study, the engineers demonstrate a 3-by-3 grid of compartments that allow magnetic beads to enter but not leave. By tagging cells with magnetic particles and directing them to different compartments, the cells can be separated, sorted, stored, studied and retrieved.
In a random-access memory chip, similar logic circuits manipulate electrons on a nanometre scale, controlling billions of compartments in a square inch. Cells are much larger than electrons, however, which would limit the new devices to hundreds of thousands of storage spaces per square inch.
But Yellen and Kim say that’s still plenty small for their purposes.
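As a rough sanity check on that figure (the compartment size below is an assumption chosen for illustration, not a number from the paper): if each compartment needs to be on the order of 50 µm across to hold and manoeuvre a cell, then

\[
\left(\frac{25{,}400\ \mu\text{m (one inch)}}{50\ \mu\text{m per compartment}}\right)^{2} \approx 2.6 \times 10^{5}
\]

compartments per square inch – hundreds of thousands, in line with the estimate above.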

“You need to analyse thousands of cells to get the statistics necessary to understand which genes are being turned on and off in response to pharmaceuticals or other stimuli,” said Yellen. “And if you’re looking for cells exhibiting rare behavior, which might be one cell out of a thousand, then you need arrays that can control hundreds of thousands of cells.”

“Our technology can offer new tools to improve our basic understanding of cancer metastasis at the single cell level, how cancer cells respond to chemical and physical stimuli, and to test new concepts for gene delivery and metabolite transfer during cell division and growth,” said Kim.
They now plan to demonstrate a larger grid of 8-by-8 or 16-by-16, then scale it up to hundreds of thousands of compartments with cells. If successful, their technology would lend itself well to manufacturing, giving scientists around the world access to single-cell experimentation.

“Our idea is a simple one,” said Kim. “Because it is a system similar to electronics and is based on the same technology, it would be easy to fabricate. That makes the system relevant to commercialisation.”
The study was published online yesterday in Nature Communications.

23. Nanotechnology breakthrough: single-atom magnet

Researchers have demonstrated, for the first time, a single atom that reaches the maximum theoretical limit of the energy needed to control its magnetisation. This fundamental work has major implications for magnetic research and future nanotechnology.


Magnetic devices like hard drives, magnetic random access memories (MRAMs), molecular magnets and quantum computers depend on the manipulation of magnetic properties. In an atom, magnetism arises from the spin and orbital angular momentum of its electrons. "Magnetic anisotropy" describes how an atom's magnetic properties depend on the orientation of the electrons' orbits, relative to the structure of a material. It also provides directionality and stability to magnetisation. Publishing in Science, researchers led by the Ecole Polytechnique Fédérale de Lausanne (EPFL) have combined various experimental and computational methods to measure, for the first time, the energy needed to change the magnetic anisotropy of a single cobalt atom.

Their methodology and findings could impact a range of fields – from studies of single atom and single molecule magnetism, to quantum computing and the design of spintronic device architectures.
In theory, every atom or molecule has the potential to be magnetic, since this depends on the movement of its electrons. Electrons move in two ways: spin, which can be loosely thought of as spinning around themselves; and orbit, which refers to an electron’s movement around the nucleus of its atom. Spin and orbital motion give rise to magnetisation, similar to an electric current circulating in a coil and producing a magnetic field. The spinning direction of the electrons therefore defines the direction of the magnetisation in a material.
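In the standard textbook picture (general physics, not a result specific to this study), the magnetic moment of an electron combines both contributions:

\[
\boldsymbol{\mu} = -\frac{\mu_{B}}{\hbar}\,\big(g_{s}\mathbf{S} + g_{\ell}\mathbf{L}\big),
\qquad
\mu_{B} = \frac{e\hbar}{2m_{e}} \approx 9.27 \times 10^{-24}\ \text{J/T},
\]

where \(\mathbf{S}\) and \(\mathbf{L}\) are the spin and orbital angular momenta and \(g_{s} \approx 2\), \(g_{\ell} = 1\).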

The magnetic properties of a material have a certain "preference" or "stubbornness" towards a specific direction. This phenomenon is referred to as "magnetic anisotropy," and is described as the "directional dependence" of a material’s magnetism. Changing its "preference" requires a certain amount of energy. The total energy of a material’s magnetic anisotropy is a fundamental obstacle when it comes to downscaling of technology like MRAMs, computer hard drives and even quantum computers, which use different electron spin states as distinct information units, or "qubits".
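In the simplest (uniaxial) textbook form – a general illustration, not necessarily the exact model used by the EPFL team – the anisotropy energy of a magnetic moment tilted at an angle \(\theta\) from its preferred "easy" axis is

\[
E(\theta) = K \sin^{2}\theta,
\]

so flipping the magnetisation from one easy direction to the opposite one means climbing an energy barrier of height \(K\). The larger \(K\) is per atom, the fewer atoms are needed to store a stable bit.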

The team at EPFL – in collaboration with scientists from ETH Zurich, the Paul Scherrer Institute and IBM Almaden Research Center – developed a method to determine the maximum possible magnetic anisotropy for a single cobalt atom. This metal is widely used in permanent magnets, as well as in magnetic recording materials for data storage applications.


The researchers used a technique called "inelastic electron tunnelling spectroscopy" to probe the quantum spin states of a single cobalt atom bound to a layer of magnesium oxide (MgO). This technique uses an atom-sized scanning tip, which allows the passage (or "tunnelling") of electrons to the cobalt atom. As electrons tunnelled through, they transferred energy and induced changes in the atom's spin properties.

The experiments revealed the maximum magnetic anisotropy energy of a single atom (~58 millielectron volts) and the longest spin lifetime for a single transition metal atom. When placed on the ultra-thin layer of magnesium oxide, these individual cobalt atoms were found to have three times the magnetism, atom for atom, of a layer made of pure cobalt. In addition, these single-atom magnets were very stable against external perturbations, which is a prerequisite for technological applications. These fundamental findings open the way for a better understanding of magnetic anisotropy and present a single-atom model system that could be used as a future "qubit".
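A rough way to read that number (an illustrative conversion, not a claim from the paper): dividing the ~58 meV barrier by the Boltzmann constant gives the temperature scale at which thermal energy becomes comparable to it,

\[
\frac{58\ \text{meV}}{k_{B}} \approx \frac{0.058\ \text{eV}}{8.62 \times 10^{-5}\ \text{eV/K}} \approx 670\ \text{K},
\]

whereas room temperature corresponds to only about 25 meV – one way to see why such a large anisotropy per atom matters for stability.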

"Quantum computing uses quantum states of matter – and magnetic properties are such a quantum state," says Harald Brune, from EPFL. "They have a life-time, and you can use the individual surface-adsorbed atoms to make qubits. Our system is a model for such a state. It allows us to optimise the quantum properties, and it is easier than previous ones, because we know exactly where the cobalt atom is in relation to the MgO layer."

"Miniaturisation is limited physically by the atomic structure of the material," said Professor Pietro Gambardella, ETH Zurich. "In our work, we have now shown that it is possible to create stable magnetic components out of single atoms; i.e. the smallest possible structure."

Rewritten by: http://www.setiawati1990.blogspot.com
Source: http://www.futuretimeline.net/




