Nireas-IWRC Information Technology (IT) Infrastructure

The biggest challenge for Nireas-IWRC is blending its laboratory functions and locations into a single research entity and deploying the information technologies that will support its vision. Nireas-IWRC intends to develop its IT network in ways that enable productive research, efficient exchange of knowledge, cost-efficient operation, and sustainability.

For these reasons, Nireas-IWRC intends to implement a centrally administered network with application and file servers and remote “thin clients”, enabling software and files to be integrated and shared from a single point and used from multiple remote locations. The envisioned IT network comprises:

  • Virtualized servers: two 2U dual-processor rack servers housing the VMware virtualization middleware and the Virtual PC environments (Windows XP Pro, applications, data, and softphone software). Each server can support up to 50 virtual Windows XP clients, with additional users added as needed in the future as NIREAS-IWRC’s computational needs increase.
  • A system administration console (one server running Windows Server 2003) that manages the virtualized servers and safeguards the Virtual PC templates.
  • An optional IP telephony server for VoIP support.
  • Stationary and mobile thin clients with the capacity to process integrated video.

The virtual PC architecture provides a secure computing platform for NIREAS-IWRC’s enterprise data, with significant multimedia capabilities, at a lower total cost of ownership than conventional PCs. More importantly, with this solution each researcher’s desktop instantly and securely “follows” him or her throughout the enterprise: data, applications, and user profiles are available from any location in the organization.
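To make the capacity of this architecture concrete, the short Python sketch below aggregates the figures quoted above (two virtualization servers, each supporting up to 50 virtual clients). It is purely illustrative: the function names and the 180-user growth example are assumptions, not part of the Nireas-IWRC design.

    # Illustrative sizing of the planned virtual-desktop capacity, using the
    # figures quoted above. Names and the example user count are hypothetical.
    VIRTUALIZATION_SERVERS = 2    # 2U dual-processor rack servers running VMware
    MAX_CLIENTS_PER_SERVER = 50   # virtual Windows XP clients per server

    def total_capacity(servers: int = VIRTUALIZATION_SERVERS,
                       clients_per_server: int = MAX_CLIENTS_PER_SERVER) -> int:
        """Upper bound on concurrently hosted virtual desktops."""
        return servers * clients_per_server

    def servers_needed(expected_users: int,
                       clients_per_server: int = MAX_CLIENTS_PER_SERVER) -> int:
        """Servers required for a future user count, assuming capacity scales
        linearly by adding identical servers."""
        return -(-expected_users // clients_per_server)  # ceiling division

    if __name__ == "__main__":
        print(f"Initial capacity: {total_capacity()} virtual desktops")
        print(f"Servers needed for 180 users: {servers_needed(180)}")  # hypothetical growth

Under these assumptions, the initial deployment supports up to 100 concurrent virtual desktops, and capacity grows by adding further identical servers.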

In several thematic areas, the Center has identified high-priority research objectives that require the solution of complex partial differential equations in complex geometries and on large-scale, three-dimensional grids. Such problems are demanding in terms of both computational know-how and computing power, and in general require the use of parallel computing methods. Members of the Center’s core staff have a proven track record in leading state-of-the-art high-performance computing projects. To ensure that the Center also has the computing power needed to achieve its primary objectives, it is important for the Center to acquire a medium-size parallel computing system. The cost of high-performance computing equipment, in the form of distributed-memory multi-core systems, has become steadily more affordable during the last decade, and it is now quite realistic for research centers to acquire and maintain their own systems. The development of state-of-the-art computational models using modern parallel computing techniques is one of the strategic objectives of the Center. This approach will ensure, on the one hand, the longevity and adaptability of the developed computational tools and, on the other, will provide the Center with enough computing power to tackle even the most complex and computationally demanding problems on its agenda.

For this purpose, the Center plans the acquisition of an integrated High Performance Computing (HPC) system in the form of a cluster consisting of 12 to 16 computational nodes, one enhanced master node, and a mass storage system in the form of an iSCSI RAID array connected through a dedicated parallel Ethernet network. The integrated HPC system will be installed in a 24U solid metal cabinet with sliding racks for easy access, and with basic HPC software and utilities, such as compilers and libraries, pre-installed. The HPC system will be acquired as an integrated solution to ensure optimal performance and compatibility with existing and future software systems.

Each compute node will be designed to achieve a score of at least 300 on the SPECint_rate_base2006 benchmark. To reach this performance, each compute node will consist of two six-core processors, such as the Intel Xeon 5650 or better. Each node will also carry at least 48 GB of DDR3 RDIMM memory, expandable to at least 128 GB; a single hard disk with at least 160 GB capacity; an integrated dedicated management card (IPMI 2.0 compliant) to streamline system maintenance and performance; high-throughput, low-latency inter-node communication (InfiniBand based); and a dual-port Gigabit Ethernet card.

The enhanced master node will carry 24 GB of DDR3 RDIMM RAM (expandable to 192 GB) and four hot-plug hard drives (500 GB SAS, 7200 RPM, 6 Gbps, 3.5″) configurable in RAID 10. In addition, the front-end will be connected to a mass-storage device with at least 12 TB of raw storage capacity (iSCSI SAS drives, with the option to expand raw capacity to at least 96 TB) and two hot-swappable redundant power supplies. The storage system will contain two integrated management controllers (four 1-Gigabit ports per controller) and is expected to have a 2U profile.

Communication between compute nodes will be achieved through a dedicated high-throughput, low-latency inter-node fabric (InfiniBand, 10 GB/s data transfer rate) operating through a specialized switch.
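As a rough aggregation of the minimum per-node figures listed above (two six-core processors, 48 GB of RAM, and a SPECint_rate_base2006 score of 300 per node) over the planned range of 12 to 16 compute nodes, the following Python sketch computes cluster-level totals. It is a back-of-the-envelope aid under those stated minimums, not part of the procurement specification.

    # Back-of-the-envelope cluster totals from the per-node minimums above.
    CORES_PER_NODE = 2 * 6           # two six-core processors (e.g. Intel Xeon 5650)
    RAM_PER_NODE_GB = 48             # DDR3 RDIMM, expandable to 128 GB per node
    SPECINT_RATE_PER_NODE = 300      # minimum SPECint_rate_base2006 per node

    def cluster_totals(nodes: int) -> dict:
        """Aggregate compute resources for a given number of compute nodes."""
        return {
            "nodes": nodes,
            "cores": nodes * CORES_PER_NODE,
            "ram_gb": nodes * RAM_PER_NODE_GB,
            # Simple sum as a rough throughput aggregate across nodes.
            "specint_rate": nodes * SPECINT_RATE_PER_NODE,
        }

    if __name__ == "__main__":
        for nodes in (12, 16):       # the planned range of compute nodes
            t = cluster_totals(nodes)
            print(f"{t['nodes']:2d} nodes: {t['cores']} cores, {t['ram_gb']} GB RAM, "
                  f"aggregate SPECint_rate_base2006 >= {t['specint_rate']}")

At the stated minimums, the cluster therefore offers 144 to 192 cores and 576 to 768 GB of distributed memory.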
In addition to the InfiniBand fabric, dual-port Gigabit Ethernet controllers are envisioned on each node to provide communication redundancy and failure recovery.
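To illustrate the distributed-memory programming model such a cluster supports, the sketch below shows the message-passing (MPI) pattern typically used by parallel numerical codes, including PDE solvers: each rank computes a local contribution and the partial results are combined over the interconnect. It is a generic example using the mpi4py library (launched, for instance, with mpiexec -n 4 python script.py), not Nireas-IWRC code, and all names in it are illustrative.

    # Generic MPI sketch: distributed trapezoidal-rule integration of x**2 on [0, 1].
    from mpi4py import MPI

    def local_work(rank: int, size: int, n: int = 1_000_000) -> float:
        """This rank's trapezoidal-rule contribution over an interleaved
        subset of the n sub-intervals."""
        h = 1.0 / n
        return sum(h * (((i * h) ** 2 + ((i + 1) * h) ** 2) / 2.0)
                   for i in range(rank, n, size))

    if __name__ == "__main__":
        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()
        partial = local_work(rank, size)
        # Combine the partial sums over the cluster interconnect.
        total = comm.reduce(partial, op=MPI.SUM, root=0)
        if rank == 0:
            print(f"Integral of x**2 on [0, 1] over {size} ranks: {total:.6f} (exact: 1/3)")

The same message-passing primitives underlie typical distributed solvers of the kind described above.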
