Up to 80% lower prices than AWS
We offer virtual servers with powerful GPUs for Machine Learning.
AMD Radeon series and NVIDIA Quadro series GPUs are available.
Starting at $0.0992 per hour
Billed by the second instead of by the hour.
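Per-second billing matters most for short workloads. A minimal sketch of the difference, assuming the advertised $0.0992/hour starting rate (actual rates vary by instance type):

```python
# Illustrative only: per-second vs. hourly billing for a short job.
HOURLY_RATE = 0.0992               # USD per hour (advertised starting rate)
RATE_PER_SECOND = HOURLY_RATE / 3600

def cost(seconds):
    """Cost of running an instance for the given number of seconds."""
    return RATE_PER_SECOND * seconds

# A 10-minute experiment billed by the second costs about $0.0165,
# whereas hourly billing would charge the full $0.0992.
per_second = cost(600)
hourly = HOURLY_RATE
print(f"per-second: ${per_second:.4f}  hourly: ${hourly:.4f}")
```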
- AMD GPU-based instances for Machine Learning, with drivers and libraries pre-installed.
- Low-priced Deep Learning Inference instances based on NVIDIA Quadro GPUs.
- Built on the latest persistent-container technology, which launches faster than standard virtual machines yet can be managed like a typical cloud instance.
- Billed by the second instead of by the hour.
- "1-Click launch" with a highly efficient UI/UX for near-instant launch and stop.
Powerful and less expensive than AWS
|Instance|FP32 TFLOPS|Stream Processors|Memory|GPU|Hourly|Monthly|
|---|---|---|---|---|---|---|
|a1.rx580|6.1|2304|8GB|AMD RADEON RX|$0.3458/h|$249/m|
|a1.vega56|10.5|3584|8GB|AMD RADEON VEGA|$0.4794/h|$345/m|
|a1.vegafe|13.1|4096|16GB|AMD RADEON VEGA|$0.6164/h|$443/m|
* AWS bills additionally for disk volume, IOPS, and network traffic usage.
* GPU EATER's prices include the disk volume and IOPS fees.
Reasons to try AMD GPUs
for Machine Learning research
NVIDIA, which dominates the machine learning market, distributes its drivers under a proprietary license whose terms and conditions it can change freely. In fact, NVIDIA changed the GeForce/Titan EULA to restrict data-center deployment and commercial hosting.
On the other hand, ROCm, the driver stack containing the kernel driver and runtime, and MIOpen, the machine learning library developed by AMD, are released under open-source licenses. It is therefore unlikely that commercial use will be restricted, even on consumer products.
MIOpen, which runs on ROCm, supports two programming models: OpenCL and HIP. HIP, a component of ROCm, allows developers to convert CUDA code to portable C++, so the same source code can be compiled to run on either NVIDIA or AMD GPUs.
We have also verified that the major Deep Learning models run on AMD GPUs with TensorFlow 1.10, including Dense, CNN, RNN, LSTM, AlexNet, VGG, GoogLeNet, ResNet, YOLOv2, SSD, PSPNet, FCN, and ICNet.
Reasons to try NVIDIA Quadro GPUs
for Deep Learning Inference
Currently, major cloud companies such as Amazon Web Services (AWS) offer instances based on NVIDIA TESLA GPUs, and using such services for Deep Learning Inference requires them to run continuously, without stoppage. In the case of AWS, however, users must pay approximately $658.80 per month (Linux on p2.xlarge), a cost which can be prohibitive for small-scale research facilities, universities, and research and development personnel at smaller enterprises.
Our research, however, has revealed that in Deep Learning Inference for tasks such as image classification and object recognition, it is far more common for the CPUs and the web API implementation to act as the bottleneck rather than the GPU. We have therefore concluded that, in many cases, cost effectiveness can be increased over machines with expensive CPUs and GPUs by using multiple machines that combine moderate CPUs with lower-specification GPUs.
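The bottleneck argument above can be made concrete with a rough throughput calculation. This is a hedged sketch: the 40 ms CPU/API time and 10 ms GPU time below are illustrative assumptions, not measurements from a real deployment.

```python
# Hypothetical per-request latencies for an inference service
# (illustrative numbers only, not measured values).
cpu_api_ms = 40.0   # preprocessing + web API handling on the CPU
gpu_ms = 10.0       # forward pass on the GPU

# If the CPU and GPU stages run serially per request, the GPU sits
# idle most of the time:
total_ms = cpu_api_ms + gpu_ms
gpu_utilization = gpu_ms / total_ms
print(f"GPU busy {gpu_utilization:.0%} of the time")  # GPU busy 20% of the time

# Throughput is capped by the CPU stage, so a faster (more expensive)
# GPU barely helps; a second modest machine roughly doubles throughput.
rps_one_machine = 1000.0 / total_ms    # requests per second
rps_two_machines = 2 * rps_one_machine
print(rps_one_machine, rps_two_machines)
```

Under these assumptions, upgrading the GPU alone could at best shave 10 ms off a 50 ms request, while adding a second low-cost machine doubles throughput outright.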