Replies: 3 comments
-
Hey @fahim9778! How's it going? I'm here to help you with your issue. Let's tackle this together! To stop the model from generating past your intended response, you can rely on the stop sequences that get merged in `_get_parameters`. Here is the relevant code snippet:

```python
def _get_parameters(self, stop: Optional[List[str]]) -> Dict[str, Any]:
    """
    Performs sanity check, preparing parameters in format needed by llama_cpp.

    Returns:
        Dictionary containing the combined parameters.
    """
    params = self._default_params
    # llama_cpp expects the "stop" key, not "stop_sequences", so we remove it:
    stop_sequences = params.pop("stop_sequences")
    # then set it as configured, or default to an empty list:
    params["stop"] = stop or stop_sequences or self.stop or []
    return params
```

Additionally, you can use the following wrapper class:

```python
class MetaLlamaChat(ChatWrapper):
    """Wrapper for Meta-Llama-3-8B-Instruct model."""

    @property
    def _llm_type(self) -> str:
        return "meta-llama-3-8b-instruct"

    sys_beg: str = "<s>[INST] <<SYS>>\n"
    sys_end: str = "\n<</SYS>>\n\n"
    ai_n_beg: str = " "
    ai_n_end: str = " </s>"
    usr_n_beg: str = "<s>[INST] "
    usr_n_end: str = " [/INST]"
    usr_0_beg: str = ""
    usr_0_end: str = " [/INST]"
```

By using this wrapper, you can ensure that the model follows the correct prompt structure and stops generating text when it encounters the configured stop tokens.
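For illustration, the stop-merging precedence in `_get_parameters` above (call-time `stop` argument first, then the configured `stop_sequences`, then the instance default) can be sketched as a standalone function. This is a minimal sketch, not the actual LangChain class; `merge_stop_params` and its parameter names are hypothetical:

```python
from typing import Any, Dict, List, Optional

def merge_stop_params(
    default_params: Dict[str, Any],
    instance_stop: Optional[List[str]],
    stop: Optional[List[str]] = None,
) -> Dict[str, Any]:
    """Mimic the stop-sequence precedence used in _get_parameters above."""
    params = dict(default_params)
    # "stop_sequences" is renamed to the "stop" key that llama_cpp expects:
    stop_sequences = params.pop("stop_sequences", None)
    # call-time stop wins, then configured sequences, then the instance default:
    params["stop"] = stop or stop_sequences or instance_stop or []
    return params

# A call-time stop list overrides the configured "stop_sequences":
merged = merge_stop_params({"stop_sequences": ["</s>"]}, None, stop=["<|eot_id|>"])
```

The practical takeaway is that whichever stop list you pass at call time wins, so passing your model's end-of-turn token there is the most direct way to cut off extra chatter.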
-
I am looking for human help. Anyone, please?
-
Hi @fahim9778, I had a similar problem, and it turned out to have a really simple solution: I tried using the `ChatHuggingFace` class from LangChain.
When I started using this wrapper, I got better answers from the "meta-llama/Meta-Llama-3-8B-Instruct" model. I also found the Model Cards & Prompt formats page for Llama 3, and I would recommend using those special tokens in your SystemMessagePrompt at least. Hope that works!
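As an illustration of that prompt format (a sketch based on Meta's published Llama 3 special tokens; double-check the exact template against the official model card, and note `build_llama3_prompt` is just a hypothetical helper name), a single-turn prompt can be assembled like this:

```python
def build_llama3_prompt(system: str, user: str) -> str:
    """Assemble a single-turn Llama 3 instruct prompt with its special tokens."""
    return (
        "<|begin_of_text|>"
        f"<|start_header_id|>system<|end_header_id|>\n\n{system}<|eot_id|>"
        f"<|start_header_id|>user<|end_header_id|>\n\n{user}<|eot_id|>"
        # Leave the assistant header open so the model fills in its turn:
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = build_llama3_prompt("You are a concise assistant.", "Say hi.")
```

Pairing this template with `"<|eot_id|>"` as a stop sequence is what typically keeps the model from chatting past its own turn.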
-
Description
I am trying to get rid of this self-chattiness, following several methods found over the internet, but no solution yet. Can anyone please help with this? I have been stuck on my MS project for the last 7 days, burning GPU memory and allocation hours with no result.
This is where it begins its talkative menace:
Can you please help kill off this annoyance? Thanks in advance!
I tried with
meta-llama/Llama-2-7b-chat-hf
and still got the same chattiness.
System Info