The spaceborne computer displayed in a mockup of the ISS before its launch last year. [Photo: courtesy of HPE]

We’re a long way from HAL 9000 (thankfully), but NASA is considering a bigger role for high-end computers in deep-space missions, such as a journey to Mars. To prepare, the International Space Station has been hosting a system built by Hewlett Packard Enterprise (HPE) for the past 11 months. The initial findings, according to HPE: It works without major glitches.

The system, an Apollo 4000-series enterprise server, is considered a “supercomputer” because it can perform 1 trillion calculations per second (one teraflop). That’s not so rare nowadays, but it’s way more computing power than NASA has had in space. Those resources can do complex analysis on large amounts of data that aren’t practical to beam back to Earth.

The key aspect of this test was to see if a standard, off-the-shelf computer could survive the abuse of life in space, especially radiation exposure, using only software modifications.

The computer will get a full evaluation when it returns to Earth later this year, but HPE says it’s already learned three valuable lessons:

  • Software can protect a system: The Apollo 4000 constantly monitored the performance of key components for possible effects from radiation. Whenever one operated out of parameters, the system hunkered down in idle mode, and then did a full health check before resuming.
  • You can’t count on the internet: HPE’s software was written assuming near-constant internet access, which is not the reality in space. HPE is considering modifications not just for spaceborne systems but for any running in remote locations.
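The protection strategy in the first bullet boils down to a watchdog loop: monitor key components, idle when anything drifts out of parameters, and resume only after a full health check. A minimal sketch of that pattern follows; the component names, nominal ranges, and action strings here are invented for illustration and are not HPE's actual software.

```python
# Hypothetical sketch of a monitor -> idle -> health-check loop;
# thresholds and component names are assumptions, not HPE's values.

NOMINAL_RANGES = {
    "cpu_temp_c": (10.0, 85.0),           # degrees Celsius
    "memory_errors_per_hour": (0.0, 5.0),  # correctable ECC errors
    "voltage_v": (11.4, 12.6),             # supply rail
}

def out_of_parameters(readings):
    """Return names of components operating outside their nominal range."""
    return [name for name, value in readings.items()
            if not (NOMINAL_RANGES[name][0] <= value <= NOMINAL_RANGES[name][1])]

def protect(readings):
    """If anything is off-nominal, idle the system and health-check
    before resuming; otherwise carry on."""
    if out_of_parameters(readings):
        return ["enter_idle_mode", "run_full_health_check", "resume"]
    return ["continue_normal_operation"]
```

For example, a reading of 95 °C on the CPU would trigger the idle/health-check/resume sequence, while all-nominal readings leave the system running normally.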

IBM and the U.S. Department of Energy’s Oak Ridge National Laboratory (ORNL) today unveiled Summit, the department’s newest supercomputer. IBM claims that Summit is currently the world’s “most powerful and smartest scientific supercomputer” with a peak performance of a whopping 200,000 trillion calculations per second. That performance should put it comfortably at the top of the Top 500 supercomputer ranking when the new list is published later this month. That would also mark the first time since 2012 that a U.S.-based supercomputer holds the top spot on that list.

Summit, which has been in the works for a few years now, features 4,608 compute servers with two 22-core IBM Power9 chips and six Nvidia Tesla V100 GPUs each. In total, the system also features over 10 petabytes of memory. Given the presence of the Nvidia GPUs, it’s no surprise that the system is meant to be used for machine learning and deep learning applications, as well as the usual high performance computing workloads for research in energy and advanced materials that you would expect to happen at Oak Ridge.
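The per-node counts above translate into system-wide totals with simple arithmetic, and the headline "200,000 trillion calculations per second" figure is more commonly written as 200 petaflops:

```python
# Totals derived from the node counts quoted in the article.
nodes = 4608          # compute servers
cpus_per_node = 2     # 22-core IBM Power9 chips per server
cores_per_cpu = 22
gpus_per_node = 6     # Nvidia Tesla V100 GPUs per server

total_cpu_cores = nodes * cpus_per_node * cores_per_cpu  # 202,752 CPU cores
total_gpus = nodes * gpus_per_node                        # 27,648 GPUs

# "200,000 trillion calculations per second" = 200 * 10^15 = 200 petaflops
peak_flops = 200_000 * 10**12
```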

IBM was the general contractor for Summit, and the company collaborated with Nvidia, Red Hat, and InfiniBand networking specialist Mellanox on delivering the new machine.

“Summit’s AI-optimized hardware also gives researchers an incredible platform for analyzing massive datasets and creating intelligent software to accelerate the pace of discovery,” said Jeff Nichols, ORNL associate laboratory director for computing and computational sciences, in today’s announcement.
