Tony's Practice Notebook


Language AI (NLP) / LLM & RAG

Compatible versions of transformers, sentence-transformers, and torch

bellmake 2024. 11. 15. 16:46

If you run into errors while using sentence-transformers in a local environment, the following combination of versions has been confirmed to work together smoothly.

 

!pip uninstall transformers sentence-transformers torch -y
!pip install transformers==4.46.2 sentence-transformers==3.3.0 torch==2.5.0
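After pinning, you can sanity-check the installed versions from Python before running anything heavier. A minimal sketch using only the standard library (`importlib.metadata` is available from Python 3.8+):

```python
import importlib.metadata as md

# The three packages pinned above
for pkg in ("transformers", "sentence-transformers", "torch"):
    try:
        print(pkg, md.version(pkg))
    except md.PackageNotFoundError:
        print(pkg, "is not installed")
```

If the printed versions differ from the pinned ones (4.46.2 / 3.3.0 / 2.5.0), the install most likely went into a different environment than the one your kernel is using.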

 

[ Normal operation output ]

 

!pip show transformers
!pip show sentence-transformers
!pip show torch

 

huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
To disable this warning, you can either:
    - Avoid using `tokenizers` before the fork if possible
    - Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
Name: transformers
Version: 4.46.2
Summary: State-of-the-art Machine Learning for JAX, PyTorch and TensorFlow
Home-page: https://github.com/huggingface/transformers
Author: The Hugging Face team (past and future) with the help of all our contributors (https://github.com/huggingface/transformers/graphs/contributors)
Author-email: transformers@huggingface.co
License: Apache 2.0 License
Location: /home/joseph/miniconda3/envs/finetuning/lib/python3.11/site-packages
Requires: filelock, huggingface-hub, numpy, packaging, pyyaml, regex, requests, safetensors, tokenizers, tqdm
Required-by: peft, sentence-transformers, trl
Name: sentence-transformers
Version: 3.3.0
Summary: State-of-the-Art Text Embeddings
Home-page: https://www.SBERT.net
Author: 
Author-email: Nils Reimers <info@nils-reimers.de>, Tom Aarsen <tom.aarsen@huggingface.co>
License: Apache 2.0
Location: /home/joseph/miniconda3/envs/finetuning/lib/python3.11/site-packages
Requires: huggingface-hub, Pillow, scikit-learn, scipy, torch, tqdm, transformers
Required-by: 
Name: torch
Version: 2.5.0
Summary: Tensors and Dynamic neural networks in Python with strong GPU acceleration
Home-page: https://pytorch.org/
Author: PyTorch Team
Author-email: packages@pytorch.org
License: BSD-3-Clause
Location: /home/joseph/miniconda3/envs/finetuning/lib/python3.11/site-packages
Requires: filelock, fsspec, jinja2, networkx, nvidia-cublas-cu12, nvidia-cuda-cupti-cu12, nvidia-cuda-nvrtc-cu12, nvidia-cuda-runtime-cu12, nvidia-cudnn-cu12, nvidia-cufft-cu12, nvidia-curand-cu12, nvidia-cusolver-cu12, nvidia-cusparse-cu12, nvidia-nccl-cu12, nvidia-nvjitlink-cu12, nvidia-nvtx-cu12, sympy, triton, typing-extensions
Required-by: accelerate, bitsandbytes, peft, sentence-transformers, torchaudio, torchvision, xformers
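The `huggingface/tokenizers` fork warning in the output above is harmless, but it can be silenced as the message itself suggests: set `TOKENIZERS_PARALLELISM` before `tokenizers` is first imported. A minimal sketch (standard library only; put it at the very top of your script or notebook cell):

```python
import os

# Must run before transformers / sentence-transformers is imported,
# otherwise tokenizers has already read the environment variable
os.environ["TOKENIZERS_PARALLELISM"] = "false"

# from sentence_transformers import SentenceTransformer  # imports come after
```

Setting it to `"true"` keeps tokenizer parallelism while still suppressing the warning; `"false"` is the safer choice when the process forks, e.g. with DataLoader workers.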
