On-Premise

To use Scaletorch On-Premise, a Deep Learning Offload Processor (DLOP) is required. The DLOP offloads processes from the GPU and executes them asynchronously.


Setup is easy! Add the DLOP to your existing network and set up the Scaletorch Controller. The software automatically offloads functions to the DLOP and speeds up training.
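The offload pattern described above can be sketched generically: ship CPU-heavy work to a separate executor so the accelerator is never blocked waiting on it. The Python below is a minimal illustration of that idea only, not the actual Scaletorch software; `augment`, `train_step`, and the thread pool are all hypothetical stand-ins for offloaded preprocessing, GPU compute, and the DLOP.

```python
# Conceptual sketch of asynchronous offloading (NOT the Scaletorch API).
# CPU-bound work is submitted to an offload executor while the training
# loop runs, so the two overlap instead of running back to back.
from concurrent.futures import ThreadPoolExecutor

# Stand-in for a pool of offload workers (hypothetical).
offload_pool = ThreadPoolExecutor(max_workers=4)

def augment(batch):
    # Stand-in for CPU-bound preprocessing that would otherwise stall the GPU.
    return [x * 2 for x in batch]

def train_step(batch):
    # Stand-in for the GPU forward/backward pass.
    return sum(batch)

batches = [[1, 2], [3, 4], [5, 6]]

# Submit augmentation for the NEXT batch while training on the current
# one, overlapping offloaded work with compute.
future = offload_pool.submit(augment, batches[0])
losses = []
for i in range(len(batches)):
    ready = future.result()  # augmented batch, ready to train on
    if i + 1 < len(batches):
        future = offload_pool.submit(augment, batches[i + 1])
    losses.append(train_step(ready))

offload_pool.shutdown()
print(losses)  # [6, 14, 22]
```

The key design point is the one-batch lookahead: the call to `future.result()` only blocks if the offloaded work has not finished yet, so when offload and compute overlap well the training loop never waits.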


How It Works


Scaletorch DLOP is installed on a standard x86 server with high-speed NICs (40 Gbps or higher).

Customers can purchase hardware from Scaletorch or use servers of their choice.

Multiple DLOPs can be clustered together for scalability.

Customers get a simple Web UI, CLI, or API to launch AI training jobs.

DLOP Specifications

DLOP appliances are available in a 1U rack-mountable form factor.

Configurations

Cores: 128 to 256

Network Interfaces: 40 Gb/s to 400 Gb/s

Pricing Plans



Have a look at our other products:
