GPU-accelerated password recovery and hash cracking supporting hundreds of algorithms and attack modes.
High-throughput LLM serving with PagedAttention, continuous batching, and OpenAI-compatible APIs for GPU clusters.
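The PagedAttention idea can be illustrated with a toy block allocator: KV-cache memory is handed out in fixed-size blocks as a sequence grows, instead of being reserved up front at maximum length, so many sequences pack tightly into one GPU. The class, block size, and method names below are illustrative, not the engine's actual API:

```python
BLOCK_SIZE = 4  # tokens per KV-cache block (toy value; real systems use e.g. 16)

class PagedKVCache:
    """Toy allocator: each sequence owns a block table mapping its logical
    token positions to physical blocks drawn from a shared free list."""
    def __init__(self, num_blocks: int):
        self.free_blocks = list(range(num_blocks))
        self.block_tables: dict[int, list[int]] = {}  # seq_id -> physical blocks
        self.lengths: dict[int, int] = {}             # seq_id -> tokens written

    def append_token(self, seq_id: int) -> None:
        n = self.lengths.get(seq_id, 0)
        if n % BLOCK_SIZE == 0:  # current block full (or first token): allocate
            if not self.free_blocks:
                raise MemoryError("KV cache exhausted")
            self.block_tables.setdefault(seq_id, []).append(self.free_blocks.pop())
        self.lengths[seq_id] = n + 1

    def free(self, seq_id: int) -> None:
        """Return a finished sequence's blocks to the shared pool."""
        self.free_blocks += self.block_tables.pop(seq_id, [])
        self.lengths.pop(seq_id, None)

cache = PagedKVCache(num_blocks=8)
for _ in range(6):
    cache.append_token(seq_id=0)
print(cache.block_tables[0])  # 6 tokens at BLOCK_SIZE=4 -> 2 physical blocks
```

Continuous batching then admits new sequences whenever blocks free up, rather than waiting for a whole batch to finish.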
Structured generation language for fast serving: RadixAttention, constrained decoding, and multi-turn batching for frontier-class workloads.
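Constrained decoding can be sketched without any model: at each step, mask the candidate set down to the symbols that can still complete a valid output, then pick the highest-scoring survivor. The character-level `VALID` vocabulary and score dictionary below are made up for illustration; real systems apply the same masking to token logits under a grammar or regex:

```python
VALID = {"true", "false", "null"}  # toy target language (JSON literals)

def allowed_next_chars(prefix: str) -> set[str]:
    """Characters that extend `prefix` toward some valid string."""
    return {v[len(prefix)] for v in VALID
            if v.startswith(prefix) and len(v) > len(prefix)}

def decode_greedy(char_scores: dict[str, float]) -> str:
    """Greedy decoding: at each step, take the best-scoring *allowed* char.
    The mask guarantees the result is always in VALID."""
    out = ""
    while out not in VALID:
        allowed = allowed_next_chars(out)
        out += max(allowed, key=lambda c: char_scores.get(c, 0.0))
    return out

print(decode_greedy({"f": 1.0}))  # mask forces a valid completion: "false"
```

Even a "model" that only scores one character ends up emitting a well-formed output, because invalid continuations are never reachable.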
NVIDIA TensorRT–based library for optimized LLM inference on GPUs, with multi-GPU execution and speculative decoding.
Composable transformations (grad, vmap, pmap) plus NumPy-like API for high-performance ML research on accelerators.
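The named transformations compose directly on ordinary Python functions. A minimal example (requires `jax` installed; the printed gradient follows from d/dw Σ(wx)² = 2w·Σx²):

```python
import jax
import jax.numpy as jnp

def loss(w, x):
    return jnp.sum((w * x) ** 2)

# grad: derivative with respect to the first argument.
dloss = jax.grad(loss)
print(dloss(2.0, jnp.array([1.0, 2.0])))  # 2w * sum(x^2) = 2*2*5 = 20.0

# vmap: vectorize over a batch of x without writing a loop.
batched = jax.vmap(loss, in_axes=(None, 0))
print(batched(2.0, jnp.ones((3, 4))))     # one scalar loss per batch row
```

`pmap` has the same compositional shape but maps the function across devices instead of an array axis.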
Multi-framework inference server with TensorRT, ONNX, PyTorch, and Python backends; supports dynamic batching, model ensembles, and GPU sharing.
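Dynamic batching can be sketched as a scheduler that drains a request queue into groups of bounded size, so concurrent requests share one forward pass instead of running alone. The function below is a deliberate simplification (no timeout window, no priorities), not any server's actual scheduling policy:

```python
from collections import deque

def dynamic_batches(requests, max_batch_size):
    """Toy scheduler: group queued requests into batches of at most
    max_batch_size, preserving arrival order."""
    queue = deque(requests)
    batches = []
    while queue:
        batch = [queue.popleft() for _ in range(min(max_batch_size, len(queue)))]
        batches.append(batch)
    return batches

print(dynamic_batches(["r1", "r2", "r3", "r4", "r5"], max_batch_size=2))
# -> [['r1', 'r2'], ['r3', 'r4'], ['r5']]
```

Production schedulers add a short wait window to let a batch fill before dispatching, trading a little latency for much higher GPU utilization.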
Alibaba’s high-performance LLM inference engine (CUDA-focused) for production serving of diverse decoder architectures.
Physics-ML / scientific deep learning framework: neural operators, PINNs, and domain-parallel training on GPUs.
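The PINN idea can be sketched in plain NumPy: penalize the differential-equation residual at collocation points alongside a data term. The ODE u'' + u = 0 and the one-parameter ansatz u(x) = a·sin(x) below are chosen purely for illustration; real frameworks use neural networks and automatic differentiation instead of finite differences:

```python
import numpy as np

def pinn_loss(a, xs, h=1e-3):
    """Physics-informed loss for u'' + u = 0 with ansatz u(x) = a*sin(x):
    mean squared ODE residual at collocation points xs (u'' by central
    finite differences), plus a data term u(pi/2) = 1 pinning a = 1."""
    u = lambda x: a * np.sin(x)
    u_xx = (u(xs + h) - 2 * u(xs) + u(xs - h)) / h**2
    residual_mse = float(np.mean((u_xx + u(xs)) ** 2))
    data_term = float((u(np.pi / 2) - 1.0) ** 2)
    return residual_mse + data_term

xs = np.linspace(0.5, 2.5, 50)
print(pinn_loss(1.0, xs), pinn_loss(2.0, xs))  # the true amplitude scores lower
```

Minimizing this loss over the parameter recovers the physically consistent solution; the frameworks above do the same with network weights, GPU kernels, and domain-parallel sampling.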
GPU-accelerated cross-platform terminal emulator.
GPU terminal with ligatures, images, and multiplexing features.
Redirects OpenGL rendering to a server GPU and streams frames for remote 3D and video workloads.
