Monday, July 15, 2013

2013 New Computer Trends

Using nanostructured glass, scientists at the University of Southampton have, for the first time, demonstrated the recording and retrieval of five-dimensional digital data by femtosecond laser writing. The storage offers unprecedented parameters: 360 TB/disc data capacity, thermal stability up to 1,000°C and a practically unlimited lifetime.



Dubbed the 'Superman' memory crystal, because the glass has been compared to the "memory crystals" used in the Superman films, the medium records data via self-assembled nanostructures created in fused quartz and can store vast quantities of data for over a million years. The information encoding is realised in five dimensions: the size and orientation of these nanostructures, in addition to their three-dimensional position.

A 300 kb text file was successfully recorded in 5D using an ultrafast laser, which produces extremely short and intense pulses of light. The file is written in three layers of nanostructured dots separated by five micrometres (five millionths of a metre). The self-assembled nanostructures change the way light travels through the glass, modifying its polarisation; the data can then be read using an optical microscope combined with a polariser, similar to those found in Polaroid sunglasses.
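To make the five-dimensional encoding concrete, here is a minimal sketch of how bits might map onto one nanostructure record. The two-bits-per-extra-dimension scheme and all field names are illustrative assumptions, not the actual ORC encoding:

```python
from dataclasses import dataclass

# Hypothetical 5D voxel: three spatial coordinates plus two data-carrying
# "dimensions" (dot size and orientation). The 2-bit quantisation below is
# an illustrative assumption, not the Southampton team's actual scheme.

@dataclass
class Voxel5D:
    x_um: float        # lateral position, micrometres
    y_um: float
    layer: int         # one of the three layers, 5 um apart
    size_level: int    # quantised dot size, 0-3 -> 2 bits
    angle_level: int   # quantised orientation, 0-3 -> 2 bits

def encode_nibble(nibble: int, x_um: float, y_um: float, layer: int) -> Voxel5D:
    """Pack 4 bits into one voxel: 2 bits in size, 2 bits in orientation."""
    assert 0 <= nibble < 16
    return Voxel5D(x_um, y_um, layer, size_level=nibble >> 2,
                   angle_level=nibble & 0b11)

def decode_nibble(v: Voxel5D) -> int:
    """Read the 4 bits back from the size and orientation levels."""
    return (v.size_level << 2) | v.angle_level

# Round trip: write a nibble, read it back.
v = encode_nibble(0b1011, x_um=12.0, y_um=8.0, layer=1)
assert decode_nibble(v) == 0b1011
```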

The research is led by Jingyu Zhang at the Optoelectronics Research Centre (ORC) and conducted under a joint project with Eindhoven University of Technology.

"We are developing a very stable and safe form of portable memory using glass, which could be highly useful for organisations with big archives," says Jingyu. "At the moment, companies have to back up their archives every five to ten years because hard-drive memory has a relatively short lifespan. Museums who want to preserve information or places like the national archives where they have huge numbers of documents, would really benefit."

Professor Peter Kazansky, the ORC's group supervisor, adds: "It is thrilling to think that we have created the first document which will likely survive the human race. This technology can secure the last evidence of civilisation: all we've learnt will not be forgotten."
The team presented their paper at the Conference on Lasers and Electro-Optics (CLEO'13) in San Jose. They are now looking for industry partners to commercialise this ground-breaking new technology.






Futurologist, author and computer scientist James Martin was found dead on Monday 24th June in the waters near his home on Agar's Island, Bermuda. British-born Dr Martin was a major inspiration for this website. He wrote over 100 books, including the Pulitzer-nominated The Wired Society (1977), famous for its remarkably accurate predictions about technology and the Internet. Another of his books, The Meaning of the 21st Century (2007), has been referenced in several places on Future Timeline. Computerworld ranked him the fourth most influential person in computer science.
Dr Martin recently spoke at the Global Future 2045 congress, opening the event with a visionary lecture on "Digital Darwinism". He described the "post brain map era" and foretold an inevitable "crunch" facing our society: "By 2045 there will either be a global renaissance, or a collapse." He also remarked that "Technocracy will replace aristocracy in the future", and spoke of climate change, overpopulation and high-tech wars.



In addition, he founded the Oxford Martin School by donating $150 million to the University of Oxford, making him the largest benefactor in its 900-year history. In a statement, the Oxford Martin School said he was "an inspiration to millions, an extraordinary intellect, with wide-ranging interests, boundless energy and an unwavering commitment to addressing the greatest challenges facing humanity."
Bermuda Police have said an investigation into Dr Martin's death is ongoing, but a spokesman added: "There does not appear to be any suspicious circumstances."







O3b, the world's first fibre-speed satellite network, is launching its first four satellites into orbit.

O3b began as a visionary idea six years ago in the jungles of Rwanda, aimed at solving the challenge of limited affordable international connectivity. It has now become a state-of-the-art satellite network that will soon provide billions of people across Africa, Latin America, the Middle East, Asia and the Pacific with access to fast and affordable Internet for the first time. The name "O3b" stands for "[the] Other 3 Billion", referring to the roughly three billion people in parts of the world where affordable broadband Internet is not currently available. The company has backing from high-profile names including Google and HSBC.
The first four O3b satellites have been built and tested, and are now poised on top of a Soyuz rocket, waiting to be launched into orbit. O3b's ground systems around the world are in place, ready to communicate with and operate these newly delivered satellites, which are due to be launched tomorrow at 18:54 UTC from the Guiana Space Centre. A live webcast will be available at ArianeSpace.tv.
Brian Holz, O3b's Chief Technology Officer, said: "We are ready. The countdown has begun. In a few short hours, our satellites will be coming over the horizon for the first time. We are very close to launching a network that has the potential to change lives in very tangible ways and that is a tremendous feeling."
O3b's next four satellites will be launched in September and full operations will start in November. All eight satellites will be placed at an altitude of 8,063 km (5,009 mi), a Medium Earth Orbit roughly a quarter of the distance from Earth of traditional geostationary (GEO) telecommunications craft, which orbit some 36,000 km above the planet. This should substantially reduce the delay, or latency, of the signal as voice or data traffic is routed via space. Each satellite will be equipped with 12 fully steerable Ka-band antennas, with a throughput of 1.2 Gbit/s per beam, resulting in a total network capacity of 115 Gbit/s. The company hopes to eventually put 20 satellites into orbit.
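The latency advantage follows directly from the geometry. Here is a quick back-of-the-envelope check using only the altitudes quoted above (propagation delay at the speed of light; real-world latency adds processing and routing overhead):

```python
# Back-of-the-envelope propagation delay for O3b's MEO constellation
# versus a geostationary (GEO) satellite, plus the network capacity
# arithmetic. Switching and routing overheads are ignored.

C_KM_PER_S = 299_792  # speed of light in vacuum

def round_trip_ms(altitude_km: float) -> float:
    """Ground -> satellite -> ground, out and back again (4 hops)."""
    return 4 * altitude_km / C_KM_PER_S * 1000

print(f"GEO (36,000 km): ~{round_trip_ms(36_000):.0f} ms round trip")
print(f"O3b  (8,063 km): ~{round_trip_ms(8_063):.0f} ms round trip")

# Capacity: 12 beams/satellite x 1.2 Gbit/s/beam x 8 satellites
print(f"Total network capacity: {12 * 1.2 * 8:.1f} Gbit/s")
```

That works out to roughly 480 ms versus 108 ms for a round trip, which is why a MEO constellation can support interactive applications that feel sluggish over GEO links.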
Worldwide, the number of Internet users currently stands at 2.5 billion, a figure expected to reach 5 billion by 2020, thanks to exponential technology improvements.




The Persistent Close Air Support (PCAS) program aims to improve air-to-ground fire coordination, but could revolutionise military tech development and deployment as well.
Air-ground fire coordination – also known as Close Air Support or CAS – is a dangerous and difficult business. Pilots and dismounted ground agents must ensure they hit only the intended target using just voice directions and, if they’re lucky, a common paper map. It can often take up to an hour to confer, get in position and strike – time in which targets can attack first or move out of reach. To help address these challenges, DARPA recently awarded a contract for Phase II of its Persistent Close Air Support (PCAS) program to the Raytheon Company of Waltham, Mass.
PCAS aims to enable ground forces and combat aircrews to jointly select and employ precision-guided weapons from a diverse set of airborne platforms. The program seeks to leverage advances in computing and communications technologies to fundamentally increase CAS effectiveness, as well as improve the speed and survivability of ground forces engaged with enemy forces.
“Our goal is to make Close Air Support more precise, prompt and easy to coordinate under stressful operational conditions,” said Dan Patt, DARPA program manager. “We could use smaller munitions to hit smaller or moving targets, minimising the risk of friendly fire or collateral damage.”
While its tools have become more sophisticated, CAS has not fundamentally changed since World War I. To accelerate CAS capabilities well beyond the current technological state of the art, PCAS envisions an all-digital system that incorporates commercial IT products and models such as open interfaces, element modularity and mobile software applications.
PCAS designs currently include two main components, PCAS-Air and PCAS-Ground. PCAS-Air would consist of an internal guidance system, weapons and engagement management systems, and high-speed data transfer via Ethernet, existing aircraft wiring or wireless networks. Based on tactical information, PCAS-Air’s automated algorithms would recommend optimal travel routes to the target, which weapon to use on arrival and how best to deploy it. Aircrews could receive information either through hardwired interfaces or wirelessly via tablet computers.
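As a purely hypothetical illustration of the kind of recommendation step described above, consider a toy munition selector. All names, fields and thresholds here are invented for this sketch and have no connection to the actual PCAS algorithms:

```python
# Toy engagement-recommendation step, loosely in the spirit of the role
# described for PCAS-Air. Every munition name, field and threshold is
# invented for illustration; nothing reflects real PCAS logic.

from dataclasses import dataclass

@dataclass
class Munition:
    name: str
    blast_radius_m: float   # larger radius -> more collateral risk
    can_hit_moving: bool

def recommend(munitions, target_moving: bool, nearest_friendly_m: float):
    """Filter by capability and safety margin, then minimise blast radius."""
    candidates = [m for m in munitions
                  if (m.can_hit_moving or not target_moving)
                  and m.blast_radius_m * 2 < nearest_friendly_m]
    return min(candidates, key=lambda m: m.blast_radius_m, default=None)

inventory = [Munition("small_guided", 15, True),
             Munition("medium_bomb", 60, False)]
print(recommend(inventory, target_moving=True, nearest_friendly_m=200))
```

A real system would weigh far more factors (weather, airspace, timing), but the shape of the problem, filtering by hard constraints and then minimising collateral risk, is what the paragraph describes.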
PCAS-Air would inform ground forces through PCAS-Ground, a suite of technologies enabling improved mobility, situational awareness and communications for fire coordination. A HUD eyepiece wired to a tablet computer like that used in PCAS-Air would display tactical imagery, maps and other information, enabling ground forces to keep their eyes more on the target and less on a computer screen. 


Parts of PCAS-Ground are already in field trials that mark some of the first large-scale use of commercial tablets for air-ground fire coordination. From December 2012 through March 2013, PCAS deployed 500 Android tablets equipped with PCAS-Ground situational awareness software to units stationed in Afghanistan. The tablets provided warfighters with added capabilities including digital gridded reference graphics (GRGs), digital terrain elevation data and other mission planning and execution tools. In the air, in-flight GPS tracking enabled pilots and ground forces to locate their relative positions in real time. Field reports show that PCAS-Ground has replaced those units’ legacy paper maps, dramatically improving ground forces’ ability to quickly and safely coordinate air engagements.
One of the most potentially groundbreaking elements of PCAS is its Smart Rail, a modular system that would attach to standard external mounting rails on many common fixed- and rotor-wing aircraft. The Smart Rail would initially carry and perform engagement computations for the PCAS-Air components, but it would also enable quick, inexpensive installation of new piloting aids and new radios to communicate to ground agents. The plug-and-play system could accommodate legacy and future equipment with equal ease, and eventually could also be compatible with unmanned air vehicles (UAVs).
“The Smart Rail is an easy way to get digital air-ground coordination onto current and future aircraft,” Patt said. “Just as the USB revolutionised how we use IT-enabled devices, modular technologies like the Smart Rail could greatly reduce development time and costs for military technology and speed deployment of PCAS and other capabilities.”
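The USB analogy suggests what a plug-and-play contract might look like in software. A minimal sketch, assuming a hypothetical module-registration interface (nothing here comes from PCAS documentation):

```python
# Minimal sketch of a plug-and-play module registry in the spirit of the
# Smart Rail's described role. The Module protocol and SmartRail class
# are hypothetical; they illustrate the USB-like idea, not a real API.

from typing import Protocol

class Module(Protocol):
    name: str
    def self_test(self) -> bool: ...

class SmartRail:
    def __init__(self):
        self.modules: dict[str, Module] = {}

    def attach(self, module: Module) -> None:
        """Accept any module that passes its own self-test, legacy or new."""
        if module.self_test():
            self.modules[module.name] = module

class LegacyRadio:
    name = "legacy_radio"
    def self_test(self) -> bool:
        return True

rail = SmartRail()
rail.attach(LegacyRadio())
print(sorted(rail.modules))  # -> ['legacy_radio']
```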



Stanford Engineering's Center for Turbulence Research (CTR) has set a new record in computational science by successfully using a supercomputer with more than 1 million computing cores. This was done to solve a complex fluid dynamics problem – the prediction of noise generated by a supersonic jet engine.

Joseph Nichols, a research associate in the centre, worked on the newly installed Sequoia IBM Blue Gene/Q system at Lawrence Livermore National Laboratory (LLNL). Sequoia recently topped the list of the world's most powerful supercomputers, boasting 1,572,864 compute cores (processors) and 1.6 petabytes of memory, connected by a high-speed five-dimensional torus interconnect.
Because of Sequoia's impressive numbers of cores, Nichols was able to show for the first time that million-core fluid dynamics simulations are possible – and also to contribute to research aimed at designing quieter aircraft engines.
The physics of noise
The exhausts of high-performance aircraft at takeoff and landing are among the most powerful man-made sources of noise. For ground crews, even those wearing the most advanced hearing protection available, this creates an acoustically hazardous environment. For communities surrounding airports, such noise is a major annoyance and a drag on property values.
Understandably, engineers are keen to design new and better aircraft engines that are quieter than their predecessors. New nozzle shapes, for instance, can reduce jet noise at its source, resulting in quieter aircraft.
Predictive simulations – advanced computer models – aid in such designs. These complex simulations allow scientists to peer inside and measure processes occurring within the harsh exhaust environment that is otherwise inaccessible to experimental equipment. The data gleaned from these simulations are driving computation-based scientific discovery as researchers uncover the physics of noise.
More cores, more challenges
Parviz Moin, a Professor in the School of Engineering and Director of the CTR, said: "Computational fluid dynamics (CFD) simulations, like the one Nichols solved, are incredibly complex. Only recently, with the advent of massive supercomputers boasting hundreds of thousands of computing cores, have engineers been able to model jet engines and the noise they produce with accuracy and speed."
CFD simulations test all aspects of a supercomputer. The waves propagating throughout the simulation require a carefully orchestrated balance between computation, memory and communication. Supercomputers like Sequoia divvy up the complex math into smaller parts so they can be computed simultaneously. The more cores you have, the faster and more complex the calculations can be.
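The "divvying up" is known as domain decomposition: the simulation grid is split into chunks, one per core, and each core exchanges only thin boundary regions ("halos") with its neighbours. A minimal one-dimensional sketch of the partitioning arithmetic, purely illustrative (real CFD codes partition three-dimensional, often unstructured, meshes):

```python
# Illustrative domain decomposition: split a 1-D grid of n_cells across
# n_cores. Real codes partition unstructured 3-D meshes, but the
# load-balancing idea is the same.

def partition(n_cells: int, n_cores: int) -> list[range]:
    """Split n_cells as evenly as possible into n_cores contiguous chunks."""
    base, extra = divmod(n_cells, n_cores)
    chunks, start = [], 0
    for core in range(n_cores):
        size = base + (1 if core < extra else 0)
        chunks.append(range(start, start + size))
        start += size
    return chunks

# Each core computes on its own chunk, exchanging only boundary cells
# ("halos") with neighbouring cores every timestep.
chunks = partition(n_cells=10_000_000, n_cores=1_572_864)
print(len(chunks), min(len(c) for c in chunks), max(len(c) for c in chunks))
```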
And yet, despite the additional computing horsepower, the calculations only become harder to orchestrate as core counts rise. At the one-million-core level, previously innocuous parts of the computer code can suddenly become bottlenecks.
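Amdahl's law makes this concrete: even a tiny serial fraction of the code caps the achievable speedup, which is why previously innocuous code paths start to matter at a million cores. A quick illustration (the serial fractions below are arbitrary examples, not measurements of the CharLES code):

```python
# Amdahl's law: speedup S(P) = 1 / (s + (1 - s) / P), where s is the
# serial (non-parallelisable) fraction of the work and P the core count.
# The serial fractions here are arbitrary examples, not measurements.

def speedup(serial_fraction: float, cores: int) -> float:
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

for s in (0.0, 1e-6, 1e-4):
    print(f"serial fraction {s:.0e}: speedup on 1,048,576 cores = "
          f"{speedup(s, 1_048_576):,.0f}x")
```

With a perfectly parallel code the speedup equals the core count, but a serial fraction of just 0.01% limits roughly a million cores to about a ten-thousand-fold speedup.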



Ironing out the wrinkles
Over the past few weeks, Stanford researchers and LLNL computing staff worked closely to iron out these last few wrinkles. This week, they were glued to their terminals during the first "full-system scaling" run, to see whether it would achieve stable run-time performance. They watched eagerly as the first CFD simulation passed through initialisation, then thrilled as code performance continued to scale up to and beyond the all-important one-million-core threshold, and as the time-to-solution declined dramatically.
"These runs represent at least an order-of-magnitude increase in computational power over the largest simulations performed at the Center for Turbulence Research previously," said Nichols. "The implications for predictive science are mind-boggling."
A homecoming
The current simulations were a homecoming of sorts for Nichols. He was inspired to pursue a career in supercomputing as a high-school student, when he attended a two-week summer program at the Lawrence Livermore computing facility in 1994, sponsored by the Department of Energy. Back then, he worked on the Cray Y-MP, one of the fastest supercomputers of its time. "Sequoia is approximately 10 million times more powerful than that machine," Nichols noted.
The Stanford ties go deeper still. The computer code used in this study, named CharLES, was developed by former Stanford senior research associate Frank Ham. It uses unstructured meshes to simulate turbulent flow in the presence of complicated geometry.
In addition to jet noise simulations, Stanford researchers are using the CharLES code to study advanced-concept scramjet propulsion systems, used in hypersonic flight at many times the speed of sound.


Jason John Bello
John Reymond Alabis
Mark Mana-ay
Ednil Loresto

of Info Tech 2B 



