I want to share a little of "the LLM technology stack, built up from the bottom" each day, keeping every article under three minutes of reading time, so that nobody feels pressured yet everyone can grow a little every day.
import textwrap

import vertexai
from vertexai.preview.language_models import ChatModel, InputOutputTextPair

def predict_large_language_model_sample(project_id: str,
                                        model_name: str,
                                        temperature: float,
                                        max_output_tokens: int,
                                        top_p: float,
                                        top_k: int,
                                        location: str = "us-central1"):
    # Initialize the Vertex AI SDK for the given project and region.
    vertexai.init(project=project_id, location=location)
    chat_model = ChatModel.from_pretrained(model_name)
    parameters = {"temperature": temperature,
                  "max_output_tokens": max_output_tokens,
                  "top_p": top_p,
                  "top_k": top_k}
    chat = chat_model.start_chat(examples=[])
    response = chat.send_message(
        '''What Transformer model are you using for this conversation?''',
        **parameters)
    # Wrap the reply to 40 columns for readability.
    wrapped_text = textwrap.fill(response.text, width=40)
    print(wrapped_text)
Now run the function:
predict_large_language_model_sample("aiex-57523", "chat-bison@001", 0.2, 256, 0.8, 40, "us-central1")
The result is:

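The call above fixes temperature=0.2, top_p=0.8, and top_k=40. As a rough illustration of what these three knobs do to the model's next-token distribution, here is a generic sketch of temperature scaling plus top-k/top-p (nucleus) filtering; this is a self-contained toy, not Vertex AI's internal implementation, and the function name and toy logits are made up for the example.

```python
import math
import random

def sample_next_token(logits, temperature=0.2, top_k=40, top_p=0.8):
    """Toy sketch: sample one token from raw logits using
    temperature scaling, then top-k, then top-p filtering."""
    # Temperature: lower values sharpen the distribution.
    scaled = {tok: l / temperature for tok, l in logits.items()}
    # Numerically stable softmax over the scaled logits.
    m = max(scaled.values())
    exp = {tok: math.exp(l - m) for tok, l in scaled.items()}
    total = sum(exp.values())
    probs = {tok: e / total for tok, e in exp.items()}
    # top_k: keep only the k most probable tokens.
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:top_k]
    # top_p: keep the smallest prefix whose cumulative probability >= top_p.
    kept, cum = [], 0.0
    for tok, p in ranked:
        kept.append((tok, p))
        cum += p
        if cum >= top_p:
            break
    # Renormalize the survivors and draw one token.
    z = sum(p for _, p in kept)
    r = random.random() * z
    for tok, p in kept:
        r -= p
        if r <= 0:
            return tok
    return kept[-1][0]
```

With a low temperature like 0.2 and a clearly dominant logit, the filtered distribution collapses to that token, which is why low temperatures make replies more deterministic.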