
ChatHuggingFace cutting total tokens #24125

Try binding the generation parameters on the chat model:

```python
llama_chat_model = ChatHuggingFace(llm=llama_llm).bind(max_tokens=8192, temperature=0.0)
```
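For context, here is a minimal end-to-end sketch of that fix, assuming the model is served through `HuggingFaceEndpoint`; the `repo_id` and the prompt are placeholders, not taken from the original thread:

```python
# Minimal sketch, assuming a Hugging Face Inference endpoint and that
# HUGGINGFACEHUB_API_TOKEN is set in the environment.
from langchain_huggingface import ChatHuggingFace, HuggingFaceEndpoint

llama_llm = HuggingFaceEndpoint(
    repo_id="meta-llama/Meta-Llama-3-8B-Instruct",  # placeholder model id
    task="text-generation",
)

# Binding the kwargs means every invoke() call allows up to 8192 output
# tokens instead of the backend's short default, so responses are no
# longer cut off mid-generation.
llama_chat_model = ChatHuggingFace(llm=llama_llm).bind(
    max_tokens=8192,
    temperature=0.0,
)

response = llama_chat_model.invoke("Explain tokenization in transformers.")
print(response.content)
```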

Answer selected by luizguilhermedev