
Create Text Embeddings in Generative AI

Use the cohere.embed models in OCI Generative AI to convert text to vector embeddings for use in applications such as semantic search, text classification, or text clustering.
Console
In the navigation bar of the Console, select a region with Generative AI, for example, US Midwest
 (Chicago) or UK South (London). See which models are offered in your
 region.
Open the navigation menu and click Analytics & AI. Under AI Services, click Generative AI.
Select a compartment that you have permission to work in. If you don't see the playground, ask an administrator to give you access to Generative AI resources, and then return to these steps.
Click Playground.
Click Embedding.
Select a model for creating text embeddings by performing one of the following
 actions:
In the Model list, select a model.
Click View model details, and then click Choose
 model.
(Optional) To run an example from the Example list, follow these steps:
Select an example from the Example list.
Click Run to generate embeddings for the example.
Review a two-dimensional version of the output in the Output vector
 projection section. 
To help you visualize the output, the output vectors are projected into two dimensions and plotted as points. Points that are close together correspond to phrases that the model considers similar.
Click Clear to remove all the sentences and start generating
 embeddings for new sentences. 
In the Sentence input area, enter text in one of the following
 ways:
Type a sentence in the box labeled 1., and then click Add sentence to add more sentences.
Click Upload file and select a file with text that you want
 to add.
 Note Only files with a .txt extension are allowed. Each input sentence, phrase, or paragraph must be separated by a newline character. A maximum of 96 inputs is allowed for each run, and each input must be less than 512 tokens. You can add sentences manually or upload more than one file until you reach the maximum number of inputs.
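Those limits can be pre-checked before upload. A minimal sketch, assuming newline-separated inputs and a rough whitespace-based token estimate (the service counts tokens with its own tokenizer, so treat this as a pre-check only):

```python
# Validate a batch of embedding inputs against the documented limits:
# at most 96 inputs per run, each under 512 tokens.
MAX_INPUTS = 96
MAX_TOKENS = 512

def validate_inputs(text: str) -> list[str]:
    """Split newline-separated text into inputs and check the limits.

    Word count is used as a rough stand-in for the model's token count.
    """
    inputs = [line.strip() for line in text.splitlines() if line.strip()]
    if len(inputs) > MAX_INPUTS:
        raise ValueError(f"{len(inputs)} inputs exceed the maximum of {MAX_INPUTS}")
    for i, item in enumerate(inputs):
        approx_tokens = len(item.split())
        if approx_tokens >= MAX_TOKENS:
            raise ValueError(f"input {i} has ~{approx_tokens} tokens (limit {MAX_TOKENS})")
    return inputs

print(validate_inputs("first sentence\nsecond sentence\n"))
```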
For the Truncate parameter, choose whether to truncate the start or the end of the input when it exceeds the maximum number of allowed tokens (512).
 Tip For input that exceeds 512 tokens, setting the Truncate parameter to None results in an error message. Before you run an embedding model on such input, choose Start or End for the Truncate parameter.
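The effect of each Truncate option can be illustrated locally. A hypothetical sketch of the behavior on a token list (the service applies truncation to its own tokenization, so this is illustrative only):

```python
def truncate_tokens(tokens, max_tokens=512, mode="End"):
    """Illustrate the Truncate options: 'Start' drops leading tokens,
    'End' drops trailing tokens, and 'None' rejects oversized input."""
    if len(tokens) <= max_tokens:
        return tokens
    if mode == "Start":
        return tokens[-max_tokens:]  # drop the start, keep the last max_tokens
    if mode == "End":
        return tokens[:max_tokens]   # drop the end, keep the first max_tokens
    raise ValueError("input exceeds the token limit and Truncate is None")

print(truncate_tokens(["a", "b", "c", "d", "e"], max_tokens=3, mode="End"))
```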
Click Run.
Review a two-dimensional version of the output in the Output vector
 projection section.
To help you visualize the output, the output vectors are projected into two dimensions and plotted as points. Points that are close together correspond to phrases that the model considers similar.
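The same kind of projection can be reproduced offline. A minimal sketch, assuming NumPy is available, that uses PCA (via SVD) to project high-dimensional embedding vectors to two dimensions for plotting; the Console may use a different projection method:

```python
import numpy as np

def project_2d(vectors):
    """Project embedding vectors to 2-D with PCA so that nearby points
    correspond to inputs the model considers similar."""
    X = np.asarray(vectors, dtype=float)
    X = X - X.mean(axis=0)               # center the data
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return X @ vt[:2].T                  # coordinates along the top 2 components

# Toy 4-D "embeddings": the first two are near each other, the third is far away.
points = project_2d([[1.0, 0.9, 0.0, 0.1],
                     [0.9, 1.0, 0.1, 0.0],
                     [0.0, 0.1, 1.0, 0.9]])
print(points.shape)  # (3, 2)
```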
When you're satisfied with the results, click Export embeddings to JSON to download a JSON file that contains a 1024-dimensional vector for each input.
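The exported vectors can be compared directly in an application, for example with cosine similarity. A minimal sketch follows; the exact JSON layout of the exported file is an assumption here, so adapt the parsing to the file you download:

```python
import json
import math

def cosine_similarity(a, b):
    """Cosine similarity: 1.0 means identical direction, 0.0 means orthogonal."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Hypothetical export layout: a list of {"input": ..., "embedding": [...]} records.
exported = json.loads('[{"input": "hello", "embedding": [0.1, 0.2, 0.3]},'
                      ' {"input": "hi", "embedding": [0.1, 0.25, 0.28]}]')
score = cosine_similarity(exported[0]["embedding"], exported[1]["embedding"])
print(round(score, 3))
```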
(Optional) 
 Click View code, select a programming language, click
 Copy code, and paste the code into a file. Ensure that the file
 maintains the format of the pasted code.
 Tip If you use the code in your applications, make sure that you authenticate your requests.
(Optional) 
 Click Clear to remove all the sentences and start generating
 embeddings for new sentences.
 Note When you click Clear, the Truncate parameter resets to its default value of None.
CLI

To create embeddings for text, use the embed-text-result operation.
Enter the following command to list the options for creating text embeddings.
oci generative-ai-inference embed-text-result embed-text -h
For a complete list of parameters and values for the OCI Generative AI CLI commands, see Generative AI Inference CLI and Generative AI Management CLI.
 
API

Run the EmbedText operation to create text embeddings.
For information about using the API and signing requests, see REST API documentation and Security Credentials. For information about SDKs, see SDKs and the CLI.
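For reference, the request body for the EmbedText operation can be assembled as plain JSON. A hedged sketch follows; the field names reflect the EmbedText schema as best understood here, so verify them against the REST API documentation before use, and note that request signing (not shown) is still required:

```python
import json

def build_embed_text_body(inputs, compartment_id,
                          model_id="cohere.embed-english-v3.0",
                          truncate="END"):
    """Assemble an EmbedText request body. Field names should be
    double-checked against the REST API reference for your API version."""
    return {
        "inputs": inputs,                # up to 96 strings per request
        "compartmentId": compartment_id,
        "servingMode": {
            "servingType": "ON_DEMAND",
            "modelId": model_id,
        },
        "truncate": truncate,            # NONE, START, or END
    }

body = build_embed_text_body(["hello world"], "ocid1.compartment.oc1..example")
print(json.dumps(body, indent=2))
```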
