Our quantum physics-based computation and AI capabilities are optimized with a cloud architecture that allows us to benefit from the security, scalability, flexibility and efficiency of cloud computing. The cloud architecture is designed for multi-cloud capacity and supported by leading public cloud service providers. We are able to adjust different cloud computing clusters across geolocations to scale up our computing capacity to hundreds of thousands of cores within minutes, accelerating the calculation process and delivering results to our customers and collaborators in a timely manner. In addition, we adopt a cloud-native design for our computing architecture, which allows us to quickly update our software in response to evolving industry requirements.
XtalPi's cloud architecture has been designed for multi-cloud capacity from the outset. The company has built successful relationships with multiple cloud service providers, including AWS, Tencent Cloud, and Google Cloud.
XtalPi's cloud computing system is tightly integrated with these cloud services, which effectively removes the upper limit on its total computing resources.
Using the auto-scaling services of our partner cloud service providers, the XtalPi team can quickly set up an independent computing cluster and scale it up to millions of cores within hours, which speeds up the calculation process and benefits drug design and development.
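To illustrate the idea of spreading a large core request across multiple providers, the sketch below plans a scale-up to one million cores. The provider names, per-node core counts, and per-request quotas are illustrative assumptions, not XtalPi's actual configuration.

```python
# Hypothetical sketch: spreading a target core count across several
# cloud providers, respecting each provider's per-request node quota.
# All quotas below are made-up illustrative values.
from math import ceil

# Assumed per-provider limits: (cores per node, max nodes per request)
PROVIDER_QUOTAS = {
    "aws": (96, 5000),
    "tencent": (64, 5000),
    "gcp": (96, 4000),
}

def plan_scale_up(target_cores: int) -> dict:
    """Greedily allocate nodes provider by provider until the
    requested core count is covered."""
    plan, remaining = {}, target_cores
    for provider, (cores_per_node, max_nodes) in PROVIDER_QUOTAS.items():
        if remaining <= 0:
            break
        nodes = min(ceil(remaining / cores_per_node), max_nodes)
        plan[provider] = nodes
        remaining -= nodes * cores_per_node
    return plan

plan = plan_scale_up(1_000_000)  # one million cores
total = sum(n * PROVIDER_QUOTAS[p][0] for p, n in plan.items())
```

In practice each entry of the resulting plan would be handed to the corresponding provider's auto-scaling API; the greedy split simply shows why multi-cloud capacity makes such a large request feasible at all.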
XtalPi's job orchestration system integrates seamlessly with different cloud computing clusters, ensuring scalability across clouds while keeping overall CPU utilization above 90%.
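A minimal sketch of the orchestration idea, under the assumption that jobs are independent batch tasks: greedily assign each job to whichever cluster currently has the most free cores, so that no cluster sits idle while others are saturated. Cluster and job sizes are illustrative.

```python
# Illustrative greedy scheduler (not XtalPi's actual orchestrator):
# each job is placed on the cluster with the most free cores.
import heapq

def schedule(jobs_cores, cluster_cores):
    """Assign each job (a required core count) to the cluster with the
    most free cores; returns the resulting per-cluster load."""
    # Max-heap of (negative free cores, cluster index)
    heap = [(-c, i) for i, c in enumerate(cluster_cores)]
    heapq.heapify(heap)
    load = [0] * len(cluster_cores)
    for need in sorted(jobs_cores, reverse=True):
        free_neg, i = heapq.heappop(heap)
        load[i] += need
        # Free cores shrink by `need`, so the stored negative grows.
        heapq.heappush(heap, (free_neg + need, i))
    return load

clusters = [96, 64, 96]   # cores per cluster (illustrative)
jobs = [8] * 30           # 30 jobs of 8 cores each = 240 cores total
load = schedule(jobs, clusters)
utilization = sum(load) / sum(clusters)  # 240 / 256, i.e. above 90%
```

Balancing load this way is what lets aggregate utilization stay high even as clusters of different sizes on different clouds join or leave the pool.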
XtalPi's computing architecture follows cloud-native design principles: all algorithms are packaged as Docker services and shipped to the cloud automatically via a full-stack DevOps toolkit. To meet the combined demands of industrial requirements and high-performance computing scenarios, XtalPi has built a highly flexible R&D pipeline that deploys software updates quickly and automatically.
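The control flow of such a release pipeline can be sketched as below. This is a hypothetical illustration, not XtalPi's actual toolchain: the service name, version, and cluster names are invented, and the build, test, and deploy stages are stubbed (in a real pipeline they would shell out to `docker build`, run containerized smoke tests, and update each cluster's deployment manifest).

```python
# Hypothetical staged release pipeline: build a container image,
# smoke-test it, then roll it out to every target cluster.
def build_image(service, version):
    # In practice: `docker build -t {service}:{version} .`
    return f"{service}:{version}"

def run_smoke_tests(image):
    # In practice: launch the container and probe a health endpoint.
    return ":" in image

def deploy(image, clusters):
    # In practice: push to a registry and update each cluster's manifest.
    return [f"{cluster}/{image}" for cluster in clusters]

def release(service, version, clusters):
    image = build_image(service, version)
    if not run_smoke_tests(image):
        raise RuntimeError(f"smoke tests failed for {image}")
    return deploy(image, clusters)

# Illustrative service and cluster names:
deployed = release("crystal-energy-ranker", "2.3.1",
                   ["aws-us-east", "tencent-sh"])
```

Because every algorithm ships as a container image, the same automated stages work identically on every cloud the image is deployed to.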
At XtalPi, we manage multiple types of data storage with a data lake design, which helps us visualize data easily and quickly aggregate it for further analysis and AI modeling. As data accumulates, our machine learning models can be iteratively retrained to achieve better performance.
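The pattern described above can be sketched in miniature: heterogeneous records from different sources sit side by side in the lake, a projection step aggregates them into one feature table, and the model is refit as new records arrive. The field names, compounds, and values are invented for illustration, and the "model" here is a one-variable least-squares fit standing in for a real AI model.

```python
# Hypothetical data-lake sketch: aggregate mixed-source records into a
# feature table, fit a model, then refit after new data arrives.
from statistics import mean

# Raw records from different sources, stored together in the "lake".
lake = [
    {"source": "wet-lab", "compound": "A", "logp": 1.2, "solubility": 0.8},
    {"source": "simulation", "compound": "B", "logp": 2.5, "solubility": 0.4},
    {"source": "wet-lab", "compound": "C", "logp": 3.1, "solubility": 0.2},
]

def aggregate(records):
    """Project heterogeneous records onto a uniform (x, y) table."""
    return [(r["logp"], r["solubility"]) for r in records if "logp" in r]

def fit(table):
    """Ordinary least squares for y = a*x + b."""
    xs, ys = zip(*table)
    mx, my = mean(xs), mean(ys)
    a = (sum((x - mx) * (y - my) for x, y in table)
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

a1, b1 = fit(aggregate(lake))   # first model
lake.append({"source": "simulation", "compound": "D",
             "logp": 4.0, "solubility": 0.1})
a2, b2 = fit(aggregate(lake))   # retrained once more data lands
```

The point of the design is that the aggregation step is the only place that needs to know about each source's schema; everything downstream, including retraining, works off the uniform table.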