Contributed by Yuekai Zhang (NVIDIA).
Launch the service directly with Docker Compose:
```sh
docker compose up
```
To build the image from scratch:
```sh
docker build . -f Dockerfile.server -t soar97/triton-cosyvoice:25.06
```
Then create and enter a container from the image, adjusting the mount directory to your environment:
```sh
your_mount_dir=/mnt:/mnt
docker run -it --name "cosyvoice-server" --gpus all --net host -v $your_mount_dir --shm-size=2g soar97/triton-cosyvoice:25.06
```
The `run.sh` script orchestrates the entire workflow through numbered stages.
You can run a subset of stages with:
```sh
bash run.sh <start_stage> <stop_stage> [service_type]
```
- `<start_stage>`: The stage to start from (0-5).
- `<stop_stage>`: The stage to stop after (0-5).

The stages cover downloading the cosyvoice-2 0.5B model from HuggingFace, building the Triton model repository (this is where either `Decoupled=True` for streaming or `Decoupled=False` for offline will be used), launching the Triton server, and the client and benchmark steps described below.

Inside the Docker container, prepare the models and start the Triton server by running stages 0-3:
```sh
# This command runs stages 0, 1, 2, and 3
bash run.sh 0 3
```
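Once stage 3 has launched the server, a quick way to confirm it is serving is to query it with the Triton Python client. This is a sketch, not part of the repository's scripts; it assumes `tritonclient[http]` is installed and that Triton is listening on its default HTTP port 8000.

```python
# Sketch: confirm the Triton server launched by stage 3 is up and see what it loaded.
# Assumes tritonclient[http] is installed and Triton's default HTTP port 8000.
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")
print("server ready:", client.is_server_ready())

# List the models Triton loaded from model_repo and their state.
for model in client.get_model_repository_index():
    print(model["name"], model.get("state", "UNKNOWN"))
```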
> [!TIP]
> Both streaming and offline (non-streaming) TTS modes are supported. For streaming TTS, set `Decoupled=True`. For offline TTS, set `Decoupled=False`. You need to rerun stage 2 if you switch between modes.
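If you are unsure which mode the current model repository was built for, you can read the setting back from the running server. The sketch below assumes the same `tritonclient[http]` setup as above and that the decoupled flag is exposed under `model_transaction_policy` in each model's configuration (Triton's standard location for it).

```python
# Sketch: report whether each loaded model runs decoupled (streaming) or not (offline).
# Assumes tritonclient[http] and Triton's default HTTP port 8000.
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")
for model in client.get_model_repository_index():
    if model.get("state") != "READY":
        continue
    config = client.get_model_config(model["name"])
    decoupled = config.get("model_transaction_policy", {}).get("decoupled", False)
    print(f'{model["name"]}: decoupled={decoupled}')
```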
Stage 4 sends a single HTTP inference request and is intended for testing the offline TTS mode (`Decoupled=False`):
```sh
bash run.sh 4 4
```
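For reference, a raw request of this kind looks roughly like the sketch below. The model name, tensor names, and shapes are assumptions for illustration only; `client_http.py` in this directory defines the ones the deployed model repository actually expects.

```python
# Sketch of a single offline TTS request over Triton's HTTP API.
# The model name and tensor names are assumptions; see client_http.py for the real ones.
import numpy as np
import soundfile as sf
import tritonclient.http as httpclient
from tritonclient.utils import np_to_triton_dtype

MODEL = "cosyvoice2"        # assumed model name
PROMPT_WAV = "prompt.wav"   # hypothetical path to a short mono prompt/reference recording

waveform, sr = sf.read(PROMPT_WAV, dtype="float32")
wav = waveform.reshape(1, -1)
wav_len = np.array([[wav.shape[1]]], dtype=np.int32)
ref_text = np.array([["Transcript of the prompt audio."]], dtype=object)
target = np.array([["Text to synthesize."]], dtype=object)

client = httpclient.InferenceServerClient(url="localhost:8000")
inputs = []
for name, data in [("reference_wav", wav), ("reference_wav_len", wav_len),
                   ("reference_text", ref_text), ("target_text", target)]:
    inp = httpclient.InferInput(name, data.shape, np_to_triton_dtype(data.dtype))
    inp.set_data_from_numpy(data)
    inputs.append(inp)

result = client.infer(MODEL, inputs)
audio = result.as_numpy("waveform").squeeze()  # assumed output tensor name
sf.write("output.wav", audio, 24000)           # CosyVoice 2 generates 24 kHz audio
```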
To benchmark the running Triton server, pass `streaming` or `offline` as the third argument:
```sh
bash run.sh 5 5 # [streaming|offline]

# You can also customize parameters such as the number of tasks and the dataset split:
# python3 client_grpc.py --num-tasks 2 --huggingface-dataset yuekai/seed_tts_cosy2 --split-name test_zh --mode [streaming|offline]
```
> [!TIP]
> It is recommended to run the benchmark multiple times to get stable results after the initial server warm-up.
To benchmark the offline inference mode, use the commands below:
```sh
# install FlashCosyVoice for token2wav batching
# git clone https://github.com/yuekaizhang/FlashCosyVoice.git /workspace/FlashCosyVoice -b trt
# cd /workspace/FlashCosyVoice
# pip install -e .
# cd -
# wget https://huggingface.co/yuekai/cosyvoice2_flow_onnx/resolve/main/flow.decoder.estimator.fp32.dynamic_batch.onnx -O $model_scope_model_local_dir/flow.decoder.estimator.fp32.dynamic_batch.onnx
bash run.sh 6 6
# You can also switch to the HuggingFace backend by setting backend=hf
```
The following results were obtained by decoding on a single L20 GPU with 26 prompt audio/target text pairs from the yuekai/seed_tts dataset (approximately 170 seconds of audio):
Client-Server Mode: Streaming TTS (First Chunk Latency)

| Mode | Concurrency | Avg Latency (ms) | P50 Latency (ms) | RTF |
|---|---|---|---|---|
| Streaming, use_spk2info_cache=False | 1 | 220.43 | 218.07 | 0.1237 |
| Streaming, use_spk2info_cache=False | 2 | 476.97 | 369.25 | 0.1022 |
| Streaming, use_spk2info_cache=False | 4 | 1107.34 | 1243.75 | 0.0922 |
| Streaming, use_spk2info_cache=True | 1 | 189.88 | 184.81 | 0.1155 |
| Streaming, use_spk2info_cache=True | 2 | 323.04 | 316.83 | 0.0905 |
| Streaming, use_spk2info_cache=True | 4 | 977.68 | 903.68 | 0.0733 |
If your service only needs a fixed speaker, you can set `use_spk2info_cache=True` in `run.sh`. To add more speakers, refer to the instructions here.
Client-Server Mode: Offline TTS (Full Sentence Latency)

| Mode | Note | Concurrency | Avg Latency (ms) | P50 Latency (ms) | RTF |
|---|---|---|---|---|---|
| Offline, Decoupled=False, use_spk2info_cache=False | Commit | 1 | 758.04 | 615.79 | 0.0891 |
| Offline, Decoupled=False, use_spk2info_cache=False | Commit | 2 | 1025.93 | 901.68 | 0.0657 |
| Offline, Decoupled=False, use_spk2info_cache=False | Commit | 4 | 1914.13 | 1783.58 | 0.0610 |
Offline Inference Mode: HuggingFace LLM vs. TensorRT-LLM

| Backend | Batch Size | llm_time_seconds | total_time_seconds | RTF |
|---|---|---|---|---|
| HF | 1 | 39.26 | 44.31 | 0.2494 |
| HF | 2 | 30.54 | 35.62 | 0.2064 |
| HF | 4 | 18.63 | 23.90 | 0.1421 |
| HF | 8 | 11.22 | 16.45 | 0.0947 |
| HF | 16 | 8.42 | 13.78 | 0.0821 |
| TRTLLM | 1 | 12.46 | 17.31 | 0.0987 |
| TRTLLM | 2 | 7.64 | 12.65 | 0.0739 |
| TRTLLM | 4 | 4.89 | 9.38 | 0.0539 |
| TRTLLM | 8 | 2.92 | 7.23 | 0.0418 |
| TRTLLM | 16 | 2.01 | 6.63 | 0.0386 |
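For reference, RTF (real-time factor) in these tables is processing time divided by the duration of the generated audio; lower is better. A quick sanity check of the arithmetic, using the stated roughly 170 seconds of audio, is sketched below (the duration is approximate, so the results only roughly match the table).

```python
# RTF = time spent generating audio / duration of the generated audio (lower is better).
# Sanity-check two offline-inference rows above against the stated ~170 s of audio.
def rtf(processing_seconds: float, audio_seconds: float) -> float:
    return processing_seconds / audio_seconds

total_audio_s = 170.0  # approximate duration of the benchmark set, per the text above
print(rtf(17.31, total_audio_s))  # TRT-LLM, batch size 1  -> ~0.102, table reports 0.0987
print(rtf(6.63, total_audio_s))   # TRT-LLM, batch size 16 -> ~0.039, table reports 0.0386
```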
To launch an OpenAI-compatible API service, run the following commands:
```sh
git clone https://github.com/yuekaizhang/Triton-OpenAI-Speech.git
cd Triton-OpenAI-Speech
pip install -r requirements.txt

# After the Triton service is running, start the FastAPI bridge:
python3 tts_server.py --url http://localhost:8000 --ref_audios_dir ./ref_audios/ --port 10086 --default_sample_rate 24000

# Test the service with curl:
bash test/test_cosyvoice.sh
```
> [!NOTE]
> Currently, only the offline TTS mode is compatible with the OpenAI-compatible server.
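As a quick check from Python, you can call the bridge like any OpenAI-compatible TTS endpoint. This is a sketch, not taken from the repository: the `/v1/audio/speech` route and payload follow the standard OpenAI TTS API, the `model` and `voice` values are placeholders, and `test/test_cosyvoice.sh` shows the exact request the project uses.

```python
# Sketch: call the OpenAI-compatible bridge started above (port 10086).
# Route and payload follow the standard OpenAI TTS API; model/voice are placeholders.
import requests

resp = requests.post(
    "http://localhost:10086/v1/audio/speech",
    json={
        "model": "cosyvoice2",            # placeholder model identifier
        "voice": "your_reference_voice",  # placeholder; corresponds to a file in ref_audios/
        "input": "Hello, this is a test of the CosyVoice Triton service.",
        "response_format": "wav",
    },
    timeout=60,
)
resp.raise_for_status()
with open("output.wav", "wb") as f:
    f.write(resp.content)
```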
This work originates from the NVIDIA CISI project. For more multimodal resources, please see mair-hub.