VertexAI - Google [Gemini, Model Garden]
Pre-requisites

```shell
pip install google-cloud-aiplatform
```

- Authentication:
  - Run `gcloud auth application-default login` (see the Google Cloud Docs).
  - Alternatively, you can set up an `application_default_credentials.json` file.
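If you use a credentials file instead of `gcloud`, a minimal sketch is to point the standard `GOOGLE_APPLICATION_CREDENTIALS` environment variable at it before calling LiteLLM (the path below is illustrative, not a fixed location):

```python
import os

# Standard Google auth env var; the path is an example - use your own file location.
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "/path/to/application_default_credentials.json"
```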
Sample Usage

```python
import litellm

litellm.vertex_project = "hardy-device-38811" # Your Project ID
litellm.vertex_location = "us-central1" # proj location

response = litellm.completion(
    model="gemini-pro",
    messages=[{"role": "user", "content": "write code for saying hi from LiteLLM"}],
)
```
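Streaming is supported as well: pass `stream=True` and iterate over the returned chunks. A minimal sketch, reusing the project and location values from above:

```python
import litellm

litellm.vertex_project = "hardy-device-38811" # Your Project ID
litellm.vertex_location = "us-central1" # proj location

response = litellm.completion(
    model="gemini-pro",
    messages=[{"role": "user", "content": "write code for saying hi from LiteLLM"}],
    stream=True,  # stream back partial chunks instead of one final response
)
for chunk in response:
    print(chunk)
```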
OpenAI Proxy Usage
Here's how to use Vertex AI with the LiteLLM Proxy Server
- Modify the `config.yaml`. You can either set a different location per model, or one location for all Vertex models.
Use this when you need to set a different location for each Vertex model:

```yaml
model_list:
  - model_name: gemini-vision
    litellm_params:
      model: vertex_ai/gemini-1.0-pro-vision-001
      vertex_project: "project-id"
      vertex_location: "us-central1"
  - model_name: gemini-vision
    litellm_params:
      model: vertex_ai/gemini-1.0-pro-vision-001
      vertex_project: "project-id2"
      vertex_location: "us-east"
```
Use this when you have one Vertex location for all models:

```yaml
litellm_settings:
  vertex_project: "hardy-device-38811" # Your Project ID
  vertex_location: "us-central1" # proj location

model_list:
  - model_name: team1-gemini-pro
    litellm_params:
      model: gemini-pro
```
- Start the proxy

```shell
$ litellm --config /path/to/config.yaml
```
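Once the proxy is running, you can send it requests with any OpenAI-compatible client. A minimal sketch, assuming the proxy listens on its default `http://0.0.0.0:4000` (adjust the port if yours differs) and using the `gemini-vision` model name from the config above:

```python
import openai

# Any non-empty key works unless the proxy is configured to check keys.
client = openai.OpenAI(api_key="anything", base_url="http://0.0.0.0:4000")

response = client.chat.completions.create(
    model="gemini-vision",  # a model_name defined in config.yaml
    messages=[{"role": "user", "content": "Hello from the LiteLLM proxy"}],
)
print(response)
```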
Set Vertex Project & Vertex Location
All calls using Vertex AI require the following parameters:
- Your Project ID
```python
import os, litellm

# set via env var
os.environ["VERTEXAI_PROJECT"] = "hardy-device-38811" # Your Project ID

### OR ###

# set directly on module
litellm.vertex_project = "hardy-device-38811" # Your Project ID
```
- Your Project Location

```python
import os, litellm

# set via env var
os.environ["VERTEXAI_LOCATION"] = "us-central1" # Your Location

### OR ###

# set directly on module
litellm.vertex_location = "us-central1" # Your Location
```
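Putting the two together, a minimal end-to-end sketch with both values set via environment variables:

```python
import os
from litellm import completion

os.environ["VERTEXAI_PROJECT"] = "hardy-device-38811"  # Your Project ID
os.environ["VERTEXAI_LOCATION"] = "us-central1"        # Your Location

response = completion(
    model="gemini-pro",
    messages=[{"role": "user", "content": "Hello from LiteLLM"}],
)
print(response)
```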
Model Garden

Model Name | Function Call |
---|---|
llama2 | `completion('vertex_ai/<endpoint_id>', messages)` |
Using Model Garden

```python
from litellm import completion
import os

## set ENV variables
os.environ["VERTEXAI_PROJECT"] = "hardy-device-38811"
os.environ["VERTEXAI_LOCATION"] = "us-central1"

response = completion(
    model="vertex_ai/<your-endpoint-id>",
    messages=[{"content": "Hello, how are you?", "role": "user"}],
)
```
Gemini Pro

Model Name | Function Call |
---|---|
gemini-pro | `completion('gemini-pro', messages)`, `completion('vertex_ai/gemini-pro', messages)` |
gemini-1.5-pro | `completion('gemini-1.5-pro', messages)`, `completion('vertex_ai/gemini-1.5-pro', messages)` |
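For example, a minimal sketch calling `gemini-1.5-pro` through the `vertex_ai/` route, with the project and location values from the earlier examples:

```python
import litellm

litellm.vertex_project = "hardy-device-38811" # Your Project ID
litellm.vertex_location = "us-central1" # proj location

response = litellm.completion(
    model="vertex_ai/gemini-1.5-pro",
    messages=[{"role": "user", "content": "Summarize what LiteLLM does in one sentence."}],
)
print(response)
```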
Gemini Pro Vision

Model Name | Function Call |
---|---|
gemini-pro-vision | `completion('gemini-pro-vision', messages)`, `completion('vertex_ai/gemini-pro-vision', messages)` |
gemini-1.5-pro-vision | `completion('gemini-pro-vision', messages)`, `completion('vertex_ai/gemini-pro-vision', messages)` |
Using Gemini Pro Vision

Call `gemini-pro-vision` in the same input/output format as OpenAI `gpt-4-vision`.

LiteLLM supports the following image types passed in `url`:

- Images with Cloud Storage URIs - gs://cloud-samples-data/generative-ai/image/boats.jpeg
- Images with direct links - https://storage.googleapis.com/github-repo/img/gemini/intro/landmark3.jpg
- Videos with direct links - https://storage.googleapis.com/github-repo/img/gemini/multimodality_usecases_overview/pixel8.mp4
Example Request

```python
import litellm

response = litellm.completion(
    model="vertex_ai/gemini-pro-vision",
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": "What's in this image?"
                },
                {
                    "type": "image_url",
                    "image_url": {
                        "url": "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg"
                    }
                }
            ]
        }
    ],
)
print(response)
```
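The same request works with a Cloud Storage URI; only the `url` value changes. The `gs://` path below is the sample image from the supported-types list above:

```python
import litellm

response = litellm.completion(
    model="vertex_ai/gemini-pro-vision",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What's in this image?"},
                {
                    "type": "image_url",
                    # Cloud Storage URI from the supported-types list above
                    "image_url": {"url": "gs://cloud-samples-data/generative-ai/image/boats.jpeg"},
                },
            ],
        }
    ],
)
print(response)
```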
Chat Models

Model Name | Function Call |
---|---|
chat-bison-32k | `completion('chat-bison-32k', messages)` |
chat-bison | `completion('chat-bison', messages)` |
chat-bison@001 | `completion('chat-bison@001', messages)` |
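Each of these uses the same `completion` interface; for instance, a minimal sketch with `chat-bison`, assuming project and location are configured as shown earlier:

```python
import os
from litellm import completion

os.environ["VERTEXAI_PROJECT"] = "hardy-device-38811"
os.environ["VERTEXAI_LOCATION"] = "us-central1"

response = completion(
    model="chat-bison",
    messages=[{"role": "user", "content": "Hello, how are you?"}],
)
print(response)
```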
Code Chat Models

Model Name | Function Call |
---|---|
codechat-bison | `completion('codechat-bison', messages)` |
codechat-bison-32k | `completion('codechat-bison-32k', messages)` |
codechat-bison@001 | `completion('codechat-bison@001', messages)` |
Text Models

Model Name | Function Call |
---|---|
text-bison | `completion('text-bison', messages)` |
text-bison@001 | `completion('text-bison@001', messages)` |
Code Text Models

Model Name | Function Call |
---|---|
code-bison | `completion('code-bison', messages)` |
code-bison@001 | `completion('code-bison@001', messages)` |
code-gecko@001 | `completion('code-gecko@001', messages)` |
code-gecko@latest | `completion('code-gecko@latest', messages)` |
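The code models follow the same calling pattern; a short sketch with `code-bison`:

```python
import os
from litellm import completion

os.environ["VERTEXAI_PROJECT"] = "hardy-device-38811"
os.environ["VERTEXAI_LOCATION"] = "us-central1"

response = completion(
    model="code-bison",
    messages=[{"role": "user", "content": "Write a Python function that reverses a string."}],
)
print(response)
```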