AxmlParser (Python)
Fast CUDA kernels for ResNet inference. Uses the Winograd algorithm to optimize the efficiency of co...
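The idea behind the Winograd algorithm mentioned above can be illustrated with the smallest 1D case, F(2,3), which produces two outputs of a 3-tap convolution using 4 multiplications instead of the 6 a direct computation needs. This is only an illustrative sketch of the arithmetic, not the repo's CUDA implementation:

```python
def winograd_f23(d, g):
    """Winograd F(2,3): two outputs of a 3-tap convolution with 4 multiplies.

    d: 4 consecutive input values, g: 3 filter taps.
    A direct sliding-window computation would need 6 multiplies.
    """
    d0, d1, d2, d3 = d
    g0, g1, g2 = g
    # The four Winograd products; the filter-side factors
    # (g0 + g1 + g2) / 2 etc. can be precomputed once per filter.
    m1 = (d0 - d2) * g0
    m2 = (d1 + d2) * (g0 + g1 + g2) / 2
    m3 = (d2 - d1) * (g0 - g1 + g2) / 2
    m4 = (d1 - d3) * g2
    # Inverse transform recovers the two convolution outputs.
    return (m1 + m2 + m3, m2 - m3 - m4)

def direct_conv(d, g):
    """Reference: direct sliding-window convolution (6 multiplies)."""
    return (sum(d[i] * g[i] for i in range(3)),
            sum(d[i + 1] * g[i] for i in range(3)))
```

In a real kernel the same trick is applied in 2D (e.g. F(2x2, 3x3)), where it cuts the multiply count per output tile from 36 to 16, which is where the inference speedup for 3x3 ResNet convolutions comes from.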
📚LeetCUDA: Modern CUDA Learn Notes with PyTorch for Beginners🐑, 200+ CUDA Kernels, Tensor Cores, ...
DashInfer is a native LLM inference engine aiming to deliver industry-leading performance atop va...
A high-throughput and memory-efficient inference and serving engine for LLMs
Benchmarking code for running quantized kernels from vLLM and other libraries
SGLang is a fast serving framework for large language models and vision language models.
Recipes for creating AI Applications with APIs from DashScope (and friends)!