Keynote Speakers - CloudTech'17



Keynote 1: Scheduling Different Types of Applications in the Cloud

Prof. Eleni Karatza, Department of Informatics, Aristotle University of Thessaloniki, Greece

Abstract:

During the last few years, the increasing popularity of cloud computing has made computational services available to scientists, consumers, and enterprises as utilities, on a pay-per-use basis. Users can access applications and data from the cloud on demand, paying only for what they use, which makes cloud computing a cost-effective infrastructure for running computationally intensive applications. However, important issues must be addressed in cloud computing, such as performance, resource allocation, efficient scheduling, energy conservation, reliability, protection of sensitive data, security and trust, cost, availability, and quality. Effective management of cloud resources is crucial for exploiting the power of these systems and achieving high system performance.
The cloud computing paradigm can offer various types of services, such as computational resources for complex applications, web services, social networking, urban mobility, health care, and environmental science. Furthermore, the simultaneous use of services from different clouds can yield additional benefits, such as lower cost and higher availability.
Energy-efficient job scheduling is an effective technique for decreasing energy consumption in the cloud, thereby reducing cost and minimizing the environmental impact of cloud computing.
Complex multiple-task applications may have precedence constraints and specific deadlines, and may impose several restrictions and QoS requirements; resource allocation and scheduling are therefore difficult tasks in clouds, where many alternative heterogeneous computers are available. Scheduling algorithms must seek to maintain a good ratio of response time to leasing cost.
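As a rough illustration of this response-time-versus-cost trade-off (not taken from the talk; the catalogue, prices, and speeds below are invented), the following Python sketch picks, for a single task with a known amount of work and a deadline, the cheapest VM type from a heterogeneous catalogue that can still finish in time.

    from dataclasses import dataclass

    @dataclass
    class VmType:
        name: str
        speed: float            # work units processed per hour (hypothetical)
        price_per_hour: float   # leasing cost per hour (hypothetical)

    def evaluate(vm, work, deadline_hours):
        """Return (meets_deadline, leasing_cost, response_time) for one task on one VM type."""
        response_time = work / vm.speed
        cost = response_time * vm.price_per_hour
        return response_time <= deadline_hours, cost, response_time

    def pick_vm(vm_types, work, deadline_hours):
        """Greedy heuristic: cheapest VM type that still meets the deadline."""
        best = None
        for vm in vm_types:
            ok, cost, _ = evaluate(vm, work, deadline_hours)
            if ok and (best is None or cost < best[0]):
                best = (cost, vm)
        return best[1] if best else None    # None: deadline cannot be met at any price

    if __name__ == "__main__":
        catalogue = [VmType("small", 1.0, 0.05),
                     VmType("medium", 2.0, 0.12),
                     VmType("large", 4.0, 0.30)]
        chosen = pick_vm(catalogue, work=6.0, deadline_hours=2.0)
        print(chosen.name if chosen else "no feasible VM")   # -> "large"

Real schedulers must of course also handle precedence constraints, multiple concurrent tasks, and uncertain runtimes; the sketch only shows the basic cost/deadline screening step.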
In this talk, we will present state-of-the-art research covering a variety of concepts in job scheduling in the cloud, based on existing or simulated cloud systems, that provides insight into problem solving, and we will suggest future directions in the cloud computing area.

About the Speaker:
Eleni Karatza is a Professor in the Department of Informatics at the Aristotle University of Thessaloniki, Greece. Dr. Karatza's research interests include Computer Systems Modeling and Simulation, Performance Evaluation, Grid and Cloud Computing, Energy Efficiency in Large Scale Distributed Systems, Resource Allocation and Scheduling, and Real-Time Distributed Systems.
Professor Karatza has authored or co-authored over 190 technical papers and book chapters, including four papers that earned best paper awards at international conferences. She is a senior member of IEEE, ACM, and SCS, and she served as an elected member of the Board of Directors at Large of the Society for Modeling and Simulation International (2009-2011). She has served as General Chair, Program Chair, and Keynote Speaker at international conferences.
Professor Karatza is the Editor-in-Chief of the Elsevier journal “Simulation Modelling Practice and Theory”, an Area Editor of Elsevier's “Journal of Systems and Software”, and has been Guest Editor of special issues of multiple international journals. http://agent.csd.auth.gr/~karatza/


Keynote 2: Towards Extreme Scale Computing - A System Perspective in the Cloud and Big Data Era

Prof. Ching-Hsien Hsu, Department of Computer Science and Information Engineering, Chung Hua University, Taiwan

Abstract:

This talk addresses a few critical issues, trends, and opportunities in high-performance computing in the cloud and big data era. Dr. Hsu will assess the impact of cloud and big data ecosystems on the development of high-performance computing systems. The cloud and big data ecosystem includes the hardware and software infrastructure, new computing architectures, data center sustainability, and the service models applied in supporting emerging Big Data intelligence and the Internet of Things (IoT). In particular, several enabling technologies will be addressed, such as NoSQL, parallel data processing, energy efficiency, load balancing, data locality, and virtualization.
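As a toy example of the data-locality idea listed above (the worker and block names are purely hypothetical and not material from the talk), the sketch below prefers to place a task on a free worker that already stores the task's input block, and only falls back to a remote worker when no local one is available.

    def place_task(block_id, block_locations, free_workers):
        """Return (worker, placement_kind), preferring data-local placement."""
        local = [w for w in block_locations.get(block_id, []) if w in free_workers]
        if local:
            return local[0], "node-local"              # no network transfer needed
        if free_workers:
            return next(iter(free_workers)), "remote"  # input block must be fetched
        return None, "wait"                            # no capacity right now

    if __name__ == "__main__":
        locations = {"block-7": ["worker-2", "worker-5"]}
        print(place_task("block-7", locations, {"worker-1", "worker-5"}))  # node-local
        print(place_task("block-7", locations, {"worker-3"}))              # remote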

About the Speaker:
Ching-Hsien Hsu is a professor and the chairman of the Department of Computer Science and Information Engineering at Chung Hua University, Taiwan. He was a distinguished chair professor at Tianjin University of Technology, China, during 2012-2016. His research includes high-performance computing, cloud computing, parallel and distributed systems, and big data analytics. He has published 200 papers in top journals such as IEEE TPDS, IEEE TSC, ACM TOMM, IEEE TCC, IEEE TETC, IEEE Systems Journal, and IEEE Network, in top conference proceedings, and as book chapters in these areas. Dr. Hsu is the editor-in-chief of the International Journal of Grid and High Performance Computing and the International Journal of Big Data Intelligence, and serves on the editorial boards of a number of prestigious journals, including IEEE Transactions on Services Computing, IEEE Transactions on Cloud Computing, the International Journal of Communication Systems, and the International Journal of Computational Science. He has been an author/co-author or an editor/co-editor of 10 books from Elsevier, Springer, IGI Global, World Scientific, and McGraw-Hill. Dr. Hsu received the distinguished award for excellence in research and the annual outstanding research award nine times from Chung Hua University between 2005 and 2016, as well as special talent awards from the Ministry of Education (2012-2015) and the National Science Council (2010-2016), Taiwan. Since 2008, he has served on the executive committees of the IEEE Technical Committee on Scalable Computing, the IEEE Special Technical Community on Cloud Computing, and the Taiwan Association of Cloud Computing. He is vice chair of the IEEE Technical Committee on Cloud Computing and of IEEE TCSC, and an IEEE senior member.


Keynote 3: HPC Meets Cloud: Opportunities and Challenges in Designing High-Performance MPI and Big Data Libraries on Virtualized InfiniBand Clusters

Dr. Dhabaleswar K. Panda, Department of Computer Science and Engineering, Ohio State University, USA

Abstract:

Significant growth has been witnessed during the last few years in HPC clusters with multi-/many-core processors, accelerators, and high-performance interconnects (such as InfiniBand, Omni-Path, iWARP, and RoCE). To alleviate the cost burden, sharing HPC cluster resources with end users through virtualization, for both scientific computing and Big Data processing, is becoming more and more attractive. The recently introduced Single Root I/O Virtualization (SR-IOV) technique for InfiniBand and High Speed Ethernet provides native I/O virtualization capabilities and is changing the landscape of HPC virtualization. However, SR-IOV lacks locality-aware communication support, which leads to performance overheads for inter-VM communication even within the same host. In this talk, we will first present our recent studies on the MVAPICH2-Virt MPI library over virtualized SR-IOV-enabled InfiniBand clusters, which can fully take advantage of SR-IOV and IVShmem to deliver near-native performance for HPC applications under Standalone, OpenStack, Docker, and Singularity environments. In the second part, we will present a framework for extending SLURM with virtualization-oriented capabilities, such as dynamic virtual machine creation with SR-IOV and IVShmem resources, to run MPI jobs effectively over virtualized InfiniBand clusters. Finally, we will demonstrate how high-performance solutions can be designed to run Big Data applications (like Hadoop) in HPC cloud environments.
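The locality issue mentioned above can be illustrated with a small, purely conceptual sketch (this is not MVAPICH2-Virt code; the class and function names are invented): communication between two VMs placed on the same physical host is routed over a shared-memory (IVShmem-style) channel, while VMs on different hosts use the SR-IOV virtual function.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class VmEndpoint:
        vm_id: str
        host_id: str     # physical host the VM is placed on

    def choose_channel(src, dst):
        """Pick the cheaper transport when both VMs share a physical host."""
        if src.host_id == dst.host_id:
            return "shared-memory"   # bypass the virtualized NIC entirely
        return "sr-iov"              # hardware-assisted I/O virtualization path

    def send(src, dst, payload):
        channel = choose_channel(src, dst)
        # A real MPI library would now post the message on the selected channel;
        # here we only report the decision.
        return f"{len(payload)} bytes {src.vm_id}->{dst.vm_id} via {channel}"

    if __name__ == "__main__":
        a = VmEndpoint("vm-0", "host-A")
        b = VmEndpoint("vm-1", "host-A")
        c = VmEndpoint("vm-2", "host-B")
        print(send(a, b, b"x" * 1024))   # same host -> shared-memory
        print(send(a, c, b"x" * 1024))   # different hosts -> sr-iov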

About the Speaker:
Dhabaleswar K. Panda is a Professor and University Distinguished Scholar of Computer Science and Engineering at the Ohio State University. He has published over 400 papers in the area of high-end computing and networking. The MVAPICH2 (High Performance MPI and PGAS over InfiniBand, Omni-Path, iWARP and RoCE) libraries, designed and developed by his research group (http://mvapich.cse.ohio-state.edu), are currently being used by more than 2,750 organizations worldwide (in 84 countries). More than 414,000 downloads of this software have taken place from the project's site. As of Nov’16, this software is empowering several InfiniBand clusters (including the 1st, 13th, 17th, and 40th ranked ones) in the TOP500 list. The RDMA packages for Apache Spark, Apache Hadoop, Apache HBase, and Memcached, together with the OSU HiBD benchmarks from his group (http://hibd.cse.ohio-state.edu), are also publicly available. These libraries are currently being used by more than 215 organizations in 29 countries. More than 21,300 downloads of these libraries have taken place. He is an IEEE Fellow. More details about Prof. Panda are available at http://www.cse.ohio-state.edu/~panda.


Keynote 4: Improving Fault Tolerance in Large Scale Cloud Data Centers through Software Defined Networking

Dr. Abdelmounaam Rezgui, Department of Computer Science and Engineering, New Mexico Tech, USA

Abstract:

Cloud data centers (CDCs) may contain hundreds or thousands of servers along with network switches, links, routers, firewalls, power supplies, storage devices, and several other types of hardware elements. Software that runs in a cloud data center includes management and control software (e.g., virtualization software), networking protocols, open-source code, customer-developed code, and various applications of known or unknown origin. This makes CDCs complex computing environments in which it is extremely difficult to predict when and where the next failure will occur. For cloud providers, the ability to predict, or to quickly detect and react to, failures is of paramount importance. For example, studies have shown that, in a typical cloud's first year of usage, roughly 1,000 individual machine failures will occur, and each minute of downtime can cost thousands of dollars on average. Moreover, fault tolerance helps avoid costly SLA violations and preserves business reputation. Fault tolerance is therefore a crucial requirement in cloud data centers. In this talk, I will first present traditional approaches to fault tolerance in cloud data centers (reactive, proactive, etc.). In the second part of my talk, I will focus on recent research that uses software-defined networking to achieve better fault tolerance in cloud data centers.
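As a generic, hypothetical illustration of the reactive, SDN-style idea (not the specific systems covered in the talk), the toy controller below keeps a graph view of the data-center topology, removes a link when a failure is reported, and recomputes paths so that traffic is rerouted around the failure. It assumes the networkx package is available for path computation; the switch names are invented.

    import networkx as nx   # assumed available for shortest-path computation

    class ToyController:
        def __init__(self, topology):
            self.topology = topology        # graph of switches/servers and links

        def route(self, src, dst):
            """Current forwarding path between two nodes."""
            return nx.shortest_path(self.topology, src, dst)

        def on_link_failure(self, u, v):
            """Reactive step: drop the failed link so new routes avoid it."""
            if self.topology.has_edge(u, v):
                self.topology.remove_edge(u, v)

    if __name__ == "__main__":
        g = nx.Graph()
        g.add_edges_from([("s1", "s2"), ("s2", "s3"), ("s1", "s4"), ("s4", "s3")])
        ctrl = ToyController(g)
        print(ctrl.route("s1", "s3"))        # e.g. ['s1', 's2', 's3']
        ctrl.on_link_failure("s2", "s3")     # link goes down
        print(ctrl.route("s1", "s3"))        # rerouted: ['s1', 's4', 's3']

A real SDN controller would additionally push updated forwarding rules to the affected switches and, in a proactive scheme, would precompute backup paths before any failure occurs.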

About the Speaker:
Abdelmounaam Rezgui is a professor in the Department of Computer Science and Engineering at New Mexico Tech, where he is the Director of the Cloud Computing and Big Data (C2BD) Lab. Dr. Rezgui has published over 80 papers in prestigious journals and highly selective conferences, including IEEE Transactions on Big Data, IEEE Transactions on Knowledge and Data Engineering (TKDE), ACM Transactions on Internet Technology (TOIT), IEEE Transactions on Services Computing (TSC), IEEE Transactions on Parallel and Distributed Systems (TPDS), IEEE Internet Computing, IEEE Security and Privacy, IEEE CLOUD, IEEE ICDE, IEEE SocialCom, IEEE BDCloud, IEEE IC2E, IEEE SCC, IEEE SOCA, ACM SAC, and IEEE MASCOTS. Two of his conference papers received best paper awards. His research is funded by NASA, Microsoft, and ICASA. He is on the editorial boards of several journals, including Springer's Big Data Analytics and the International Journal of Internet of Things and Big Data (IJITBD). Abdelmounaam currently serves on the program committees of several conferences, including IEEE BigData, IEEE CloudNet, IEEE LCN, and IEEE BigDataSE. More details can be found at: https://www.cs.nmt.edu/~rezgui
