Used by Data Scientists in 16 countries

We offer virtual servers with various GPUs for Machine Learning.

NVIDIA Quadro, AMD Radeon Vega and Radeon RX instances are available.

Starting at $0.0992 per Hour

By signing up, you agree to the Terms of Service.

Reasons why you should try NVIDIA Quadro GPUs
for Deep Learning Inference.

Currently, major cloud providers such as Amazon Web Services (AWS) offer instances based on NVIDIA Tesla GPUs, and using such services for Deep Learning Inference requires running them continuously without interruption. On AWS, however, this costs approximately $658.80 per month (Linux on p2.xlarge), a cost that can be prohibitive for small research facilities, universities, and research and development teams at smaller enterprises.
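As a quick sanity check on the monthly figure above, here is the arithmetic behind it, assuming the p2.xlarge on-demand rate of $0.90/hour and an average month of 30.5 days (both approximations):

```python
# Rough reconstruction of the quoted AWS monthly cost for always-on inference.
# Assumptions: $0.90/hour on-demand rate (Linux, p2.xlarge) and a
# 30.5-day average month, i.e. 732 billable hours.
hourly_rate = 0.90            # USD per hour
hours_per_month = 24 * 30.5   # 732 hours
monthly_cost = hourly_rate * hours_per_month
print(f"${monthly_cost:.2f}")  # -> $658.80
```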

Our research, however, has revealed that in Deep Learning Inference tasks such as Image Classification and Object Recognition, the CPU and the web API implementation are far more likely to be the bottleneck than the GPU. We have therefore concluded that in many cases, multiple machines combining moderate CPUs with lower-specification GPUs can be more cost-effective than a single machine with an expensive CPU and GPU.
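The reasoning can be illustrated with a toy throughput model (all numbers below are made up for illustration): a serial preprocess-then-infer pipeline runs only as fast as its slowest stage, so a GPU much faster than the CPU feeding it is partly idle capacity you still pay for.

```python
# Toy model: pipeline throughput is limited by the slowest stage.
# All rates are hypothetical, chosen only to illustrate the bottleneck effect.
def pipeline_throughput(cpu_imgs_per_sec, gpu_imgs_per_sec):
    """Images/second for a serial preprocess-then-infer pipeline."""
    return min(cpu_imgs_per_sec, gpu_imgs_per_sec)

# One expensive machine: a fast GPU starved by CPU-side preprocessing.
big = pipeline_throughput(cpu_imgs_per_sec=120, gpu_imgs_per_sec=900)

# Three cheaper machines whose GPUs roughly match their CPUs.
small = 3 * pipeline_throughput(cpu_imgs_per_sec=120, gpu_imgs_per_sec=150)

print(big, small)  # -> 120 360
```

Under these assumptions the three modest machines deliver three times the throughput of the single high-end one, because the expensive GPU's extra speed is wasted behind the CPU bottleneck.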

Reasons why you should try AMD GPUs
for Machine Learning research.

NVIDIA, which dominates the machine learning market, provides its drivers under a proprietary license, which lets it change the terms and conditions freely. In fact, NVIDIA changed the GeForce/Titan EULA to restrict data center deployment and commercial hosting.

On the other hand, AMD's driver stack, ROCm, which contains the kernel driver and runtime, and its machine learning library, MIOpen, are released under open-source licenses. It is therefore unlikely that commercial use will be restricted, even on consumer products.

MIOpen, which runs on ROCm, supports two programming models: OpenCL and HIP. HIP, a component of ROCm, allows developers to convert CUDA code to portable C++; the same source code can then be compiled to run on either NVIDIA or AMD GPUs.
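In practice, ROCm ships translation tools (hipify) that rewrite CUDA runtime API calls to their HIP equivalents. A greatly simplified sketch of the idea, using a few real CUDA-to-HIP name mappings (the actual tools handle kernels, headers, and far more APIs):

```python
# Greatly simplified sketch of what ROCm's hipify tools do: rewrite CUDA
# runtime API names to their HIP equivalents, so the resulting C++ source
# can be compiled for NVIDIA (nvcc) or AMD (hipcc) GPUs.
# These four mappings are real; everything else here is a toy illustration.
CUDA_TO_HIP = {
    "cudaMalloc": "hipMalloc",
    "cudaMemcpy": "hipMemcpy",
    "cudaFree": "hipFree",
    "cudaDeviceSynchronize": "hipDeviceSynchronize",
}

def hipify(cuda_source: str) -> str:
    """Textually translate CUDA API names to HIP names."""
    for cuda_name, hip_name in CUDA_TO_HIP.items():
        cuda_source = cuda_source.replace(cuda_name, hip_name)
    return cuda_source

print(hipify("cudaMalloc(&d_buf, n); cudaFree(d_buf);"))
# -> hipMalloc(&d_buf, n); hipFree(d_buf);
```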

We have also verified that Deep Learning's major models run on AMD GPUs with TensorFlow 1.3, including Dense, CNN, RNN, LSTM, AlexNet, VGG, GoogLeNet, ResNet, YOLOv2, SSD, PSPNet, FCN, and ICNet.


Billed per second instead of per hour.


  • AMD GPU-based instances for Machine Learning are available, with drivers and libraries pre-installed.
  • Low-priced Deep Learning Inference instances based on NVIDIA Quadro are available.
  • Built on the latest persistent container technology, which is faster than standard virtual machines yet can be managed just like typical cloud instances.
  • Billed per second instead of per hour.
  • "1-Click launch" and a highly efficient UI/UX. Hyper-quick launch and stop.
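To see why per-second billing matters for short-lived inference jobs, here is a small comparison using the entry price quoted above and a hypothetical 90-second job (the job duration is made up for illustration):

```python
import math

# Compare per-second billing with billing rounded up to full hours.
def cost_per_second(rate_per_hour, seconds):
    """Cost when every second is billed at the prorated hourly rate."""
    return rate_per_hour / 3600 * seconds

def cost_per_hour(rate_per_hour, seconds):
    """Cost when usage is rounded up to whole billable hours."""
    return rate_per_hour * math.ceil(seconds / 3600)

rate = 0.0992  # $/h, the entry price quoted above
job = 90       # a hypothetical 90-second inference job

print(round(cost_per_second(rate, job), 4))  # -> 0.0025
print(round(cost_per_hour(rate, job), 4))    # -> 0.0992
```

Under these assumptions the hourly-billed job costs 40 times more than the per-second-billed one.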

Use cases: Machine Learning such as Deep Learning, high-performance databases, computational fluid dynamics, video encoding, 3D graphics workstations, 3D rendering, VFX, computational finance, seismic analysis, molecular modeling, genomics, and other server-side GPU compute workloads.


Powerful and less expensive than AWS

Provider   Plan        TFlops  Cores  GPU Mem  Architecture     Hourly     Monthly
GPU EATER  n1.p400     0.641   256    2GB      NVIDIA Pascal    $0.0992/h  $71/m
GPU EATER  n1.p600     1.195   384    2GB      NVIDIA Pascal    $0.1058/h  $345/m
GPU EATER  n1.p1000    1.195   384    2GB      NVIDIA Pascal    $0.1058/h  $79/m
GPU EATER  n1.p1000    1.894   640    4GB      NVIDIA Pascal    $0.3306/h  $238/m
AWS        p2.xlarge   4.3     2496   12GB     NVIDIA Kepler    $0.9000/h  $648/m
AWS        g3.4xlarge  4.8     2048   8GB      NVIDIA Maxwell   $1.1400/h  $820/m
GPU EATER  n1.p4000    5.2     1792   8GB      NVIDIA Pascal    $0.7936/h  $571/m
GPU EATER  a1.rx580    6.1     2304   8GB      AMD Radeon RX    $0.3458/h  $249/m
GPU EATER  a1.vega56   10.5    3584   8GB      AMD Radeon Vega  $0.4794/h  $345/m
GPU EATER  a1.vegafe   13.1    4096   16GB     AMD Radeon Vega  $0.6164/h  $443/m
AWS        p3.2xlarge  14.0    5120   16GB     NVIDIA Volta     $3.0600/h  $2203/m

* AWS bills additional charges for disk volume usage, IOPS, and network traffic.
* GPU EATER prices include the disk volume and IOPS fees.
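One way to read the table is raw price/performance. The ratios below are computed directly from the listed TFlops and hourly prices; they are a headline-number comparison, not a benchmark of real workload speed:

```python
# TFlops per dollar-hour, taken straight from the price table above.
# This is a raw ratio of listed specs, not a measured benchmark.
plans = {
    "a1.vega56 (GPU EATER)": (10.5, 0.4794),
    "a1.vegafe (GPU EATER)": (13.1, 0.6164),
    "p2.xlarge (AWS)":       (4.3,  0.9000),
    "p3.2xlarge (AWS)":      (14.0, 3.0600),
}
for name, (tflops, hourly) in plans.items():
    print(f"{name}: {tflops / hourly:.1f} TFlops per $/h")
```

By this measure the AMD Vega instances deliver roughly 21 to 22 TFlops per dollar-hour, versus roughly 4 to 5 for the AWS p2.xlarge and p3.2xlarge instances.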

Get started with GPU EATER