To use Scaletorch On-Premises, the Deep Learning Offload Processor (DLOP) is required. The DLOP offloads functions from the GPU servers and executes them asynchronously.
Setup is easy! Add a DLOP to your existing network and set up the Scaletorch Controller. The software automatically offloads functions to the DLOP and speeds up training.
How It Works
Scaletorch DLOP is installed on a standard x86 server with high-speed NICs (40 Gbps or higher).
Customers can purchase hardware from Scaletorch or use servers of their choice.
Multiple DLOPs can be clustered together for scalability.
Customers get a simple Web UI, CLI, or API to launch AI training jobs.
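As a rough illustration only, launching a job through an API of this kind might involve submitting a JSON job description to the controller. Every field name, value, and endpoint implied below is hypothetical and does not reflect Scaletorch's actual API schema; this sketch just shows the shape of such a request payload.

```python
import json

# Hypothetical job description -- all field names are illustrative,
# NOT Scaletorch's actual API schema.
job = {
    "name": "resnet50-run-1",
    "entrypoint": "python train.py --epochs 90",
    "gpus": 8,
    "dlop_offload": True,  # ask the controller to offload eligible work to the DLOP
}

# In a real deployment this payload would be sent to the Scaletorch
# Controller (e.g. over HTTP); here we only serialize it for display.
payload = json.dumps(job, indent=2)
print(payload)
```

The same job description could equally be entered through the Web UI or passed to a CLI; the API path is shown here only because it is the easiest to sketch.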