Journal of East China Normal University (Natural Science), 2014, Vol. 2014, Issue (5): 240-251. doi: 10.3969/j.issn.1000-5641.2014.05.021


Co-OLAP: Research on cooperated OLAP with star schema benchmark on hybrid CPU&GPU platform

ZHANG Yu1,2, ZHANG Yan-Song1,2,3, ZHANG Bing1,2, CHEN Hong1,2, WANG Shan1,2

  1. DEKE Lab, Renmin University of China, Beijing 100872, China;
    2. School of Information, Renmin University of China, Beijing 100872, China;
    3. National Survey Research Center at Renmin University of China, Beijing 100872, China
  • Online: 2014-09-25  Published: 2014-11-27

Abstract: Modern GPUs offer substantial parallel computing power, even the moderate GPUs found in moderate servers. In contrast to much recent research, a moderate server may be equipped with several high-end CPUs and only a moderate GPU, which then provides additional computing power rather than replacing CPU computing. In this paper we focus on Co-OLAP (Cooperated OLAP) processing on a moderate workstation, illustrating how a moderate GPU can cooperate with powerful CPUs and how data and computation should be distributed between the two balanced computing platforms to obtain a simple and efficient Co-OLAP model. Based on a real-world configuration, we propose a maximal high-performance data distribution model that accounts for RAM size, GPU device memory size, the dataset schema, and a specially designed AIR (array index referencing) algorithm. The Co-OLAP model splits the dataset into host-memory-resident and device-memory-resident parts, and OLAP processing is likewise divided into CPU-adaptive and GPU-adaptive computation so that data movement between CPU and GPU memories is minimized. Experimental results show that two six-core Xeon CPUs slightly outperform one NVIDIA Quadro 5000 GPU with 352 CUDA cores on the SF=20 SSB dataset, and that the Co-OLAP model assigns balanced workloads and keeps each platform simple and efficient.
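
The abstract names an AIR (array index referencing) algorithm but does not spell out its mechanics. The CUDA sketch below illustrates one common reading of that idea, assuming fact-table foreign keys are stored as 0-based offsets into dimension arrays so that a star join reduces to direct array lookups; all identifiers (lo_custkey, lo_revenue, cust_selected) and the toy data in main are illustrative and not taken from the paper.

// Minimal AIR-style star-join sketch, assuming foreign keys are
// 0-based offsets into dimension arrays. Illustrative only.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void air_aggregate(const int   *lo_custkey,    // fact FK = index into customer dimension
                              const float *lo_revenue,    // fact measure column
                              const char  *cust_selected, // 1 if the customer passes the predicate
                              int n, float *result)
{
    float local = 0.0f;
    for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < n;
         i += gridDim.x * blockDim.x) {
        // AIR: the foreign key is used directly as an array index,
        // so the "join" is a single dependent load, no hash probe.
        if (cust_selected[lo_custkey[i]])
            local += lo_revenue[i];
    }
    atomicAdd(result, local);   // crude reduction, adequate for a sketch
}

int main()
{
    const int n = 1 << 20;        // 1M fact rows (toy size)
    const int n_cust = 1 << 16;   // 64K dimension rows

    // Host-side toy data: every 4th customer passes the predicate.
    int   *h_fk  = new int[n];
    float *h_rev = new float[n];
    char  *h_sel = new char[n_cust];
    for (int i = 0; i < n_cust; ++i) h_sel[i] = (i % 4 == 0);
    for (int i = 0; i < n; ++i) { h_fk[i] = i % n_cust; h_rev[i] = 1.0f; }

    int *d_fk; float *d_rev, *d_res; char *d_sel;
    cudaMalloc(&d_fk,  n * sizeof(int));
    cudaMalloc(&d_rev, n * sizeof(float));
    cudaMalloc(&d_sel, n_cust);
    cudaMalloc(&d_res, sizeof(float));
    cudaMemcpy(d_fk,  h_fk,  n * sizeof(int),   cudaMemcpyHostToDevice);
    cudaMemcpy(d_rev, h_rev, n * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(d_sel, h_sel, n_cust,            cudaMemcpyHostToDevice);
    cudaMemset(d_res, 0, sizeof(float));

    air_aggregate<<<256, 256>>>(d_fk, d_rev, d_sel, n, d_res);

    float h_res = 0.0f;
    cudaMemcpy(&h_res, d_res, sizeof(float), cudaMemcpyDeviceToHost);
    printf("SUM(revenue) over selected customers: %.0f\n", h_res);
    return 0;
}

Under this reading, only the fact columns touched by a query need to cross the PCIe bus while the small dimension-side structures stay resident in device memory, which is consistent with the abstract's goal of minimizing data movement between CPU and GPU memories.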

Key words: GPU, OLAP, Co-OLAP, AIR
