
How to handle the Petabytes

-Sabina Griffith

When it was announced in 1985 that the American "Cray-2" supercomputer had achieved a capacity of one Gigaflop per second, even some scientists had to consult the dictionary. The prefix Giga is derived from the Greek gigas, meaning giant, and stands for one billion. A Gigaflop computer can perform one billion floating-point operations (Flop) per second.

In 1985, this was a thousandfold the capacity achievable with a home computer. Today, every mobile phone contains a Gigaflop processor. And while the "big bang" hunters at CERN are dealing with Petaflops (10^15 calculations per second), the new kid on the large science block, the Square Kilometre Array (SKA), which will be built in South Africa and Australia, will require supercomputers that can digest data on the Exa scale. That is a 1 followed by 18 zeros.

The steep increase in computing capacity described by Moore's Law is comparable to the performance of magnetic fusion devices ... and to their generation of data. Since the first plasma pulse on JET in 1983, the raw data collected during each discharge has roughly doubled every two years. Today, about 10 Gigabytes of data are collected per 40-second pulse; the data collected over 70,000 JET pulses amounts to roughly 35 Terabytes.

When ITER starts operation, the data generated will again reach new dimensions. Each plasma discharge—lasting 300 to 3000 seconds—will generate an estimated several tens of Gigabytes per second, leading to a total of a few hundred Petabytes per year. And it is not only the storage and archiving of this huge amount of data that poses a challenge, but also its accessibility in real time.
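A quick back-of-envelope check shows how these figures hang together. The specific numbers below (50 Gigabytes per second, 1000-second discharges, 5,000 discharges per year) are illustrative assumptions within the ranges quoted above, not official ITER parameters:

```python
# Back-of-envelope check of the ITER data-volume estimate.
# Assumed (illustrative) values: 50 GB/s sustained acquisition,
# 1000-second discharges, ~5,000 discharges per year.
gb_per_second = 50
discharge_seconds = 1000
discharges_per_year = 5000

per_discharge_tb = gb_per_second * discharge_seconds / 1000    # GB -> TB
per_year_pb = per_discharge_tb * discharges_per_year / 1000    # TB -> PB

print(f"{per_discharge_tb:.0f} TB per discharge, {per_year_pb:.0f} PB per year")
# prints "50 TB per discharge, 250 PB per year"
```

With these assumptions, a single discharge produces tens of Terabytes, and a year of operation lands in the "few hundred Petabytes" range cited in the article.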

In a recent workshop organized by Lana Abadie, responsible for the scientific archiving system within the CODAC team, the challenge of storing and accessing the flood of scientific data was addressed by experts from many different institutes and backgrounds.

"We need to store this data almost real-time to allow physicists to start their analysis code in order to allow calculations for the next pulses," explains Lana. "This data is what we call raw data, i.e., data coming from the ITER machine unfiltered. The main producers will be the various diagnostics systems. Then we need to store processed and simulated data. Different physics applications will use raw data and process them. This output needs to be stored too—and made accessible."

In other words, raw, processed, and simulated data will be accessed in the same way. But accessing the data in an efficient way is not an easy task. "Imagine you have a pile of 20,000,000 iPods of 16 GB each—equivalent to the yearly production of all types of ITER data. Let's say you are looking for a song that was produced last February, but you don't even know the exact title. You remember that it was something like 'I follow' and that it was a remix of an earlier song by the same artist. Of course, you could spend quite a few hours finding the song. The challenge for CODAC is to provide data access within a few seconds. It is very important to understand the different archiving techniques and to stay abreast of upcoming technologies in that area."
 
The CODAC archiving system has to be ready for First Plasma with well-proven scalability. The data will be stored first in the CODAC server room and will then be streamed to the IT computing centre. CODAC will develop a first prototype within the next two years. The team is currently studying a system based on HDF5, a well-known scientific data format used by many institutions, including NASA. HDF5 allows the storage of all types of data together with their corresponding metadata.
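HDF5's appeal for this kind of archive is its data model: a file is a hierarchy of groups and datasets, and every node can carry attribute metadata, so raw signals, processed results, and their descriptions travel together. The sketch below illustrates that model with plain Python structures; real code would use the h5py or HDF5 C libraries, and all names (pulse number, diagnostic names) are hypothetical, not ITER's actual schema:

```python
# Illustrative sketch of the HDF5 data model (groups, datasets, attributes)
# using nested dicts. All names here are hypothetical examples.
pulse_file = {
    "attrs": {"pulse": 12345, "duration_s": 400},      # file-level metadata
    "diagnostics": {                                    # a "group" of raw data
        "magnetics": {
            "attrs": {"sample_rate_hz": 10_000, "units": "T"},
            "data": [0.10, 0.20, 0.15],                 # a "dataset" (raw samples)
        },
    },
    "processed": {                                      # processed output stored
        "attrs": {"code": "equilibrium-reconstruction"},  # alongside the raw data
        "data": [1.02, 1.01],
    },
}

def lookup(tree, path):
    """Navigate a slash-separated path, the way HDF5 addresses its hierarchy."""
    node = tree
    for part in path.strip("/").split("/"):
        node = node[part]
    return node

signal = lookup(pulse_file, "/diagnostics/magnetics")
print(signal["attrs"]["units"])   # metadata travels with the data; prints "T"
```

The same path-based access works whether the dataset holds raw, processed, or simulated data, which matches the goal stated above of accessing all three in the same way.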
