Understanding the Pipeline Endpoint
Introduction
In Layar 2.0, the /layar/v2/gpt/pipeline and /layar/v2/gpt/pipeline/run endpoints were introduced, which allow you to configure pipelines that can be called via a Layar ID. These pipelines can be used to run canned prompt chains, allowing for more intricate data extraction.
Assembling The Pipeline
The pipeline is a JSON dictionary with three primary values: name, inputParameterNames, and nodes.
{
  "name": "ds-template-2",
  "inputParameterNames": ["prompt"],
  "nodes": [
    {
      "name": "search_node",
      "type": "LAYARSEARCH",
      "dependencies": [],
      "nodeConfig": {
        "savedListIds": ["SAVED_LIST_ID"],
        "resultFormat": "DOCUMENT"
      }
    },
    {
      "name": "llm_node",
      "type": "LLM",
      "dependencies": ["search_node"],
      "nodeConfig": {
        "promptTemplate": "${prompt}",
        "model": "gpt-oss-120b",
        "sources": ["${search_node}"]
      }
    }
  ]
}
name
A string value containing the name of the pipeline.
inputParameterNames
A list of strings indicating the variables that must be supplied with the call that runs the pipeline.
nodes
A list of JSON dictionaries. Each node contains name, type, dependencies, and nodeConfig.
name
A string value containing the name of the node.
type
A string value containing the type of the node. There are currently only three node types: LAYARSEARCH, LLM, and SHAREPOINTSEARCH.
dependencies
A list of strings containing the names of the nodes this node relies on. This is how you determine where in the chain the node exists. Nodes with no dependencies run first in the pipeline.
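The execution order implied by dependencies can be illustrated with a topological sort using Python's standard library graphlib. This is a sketch of the ordering rule only, not of Layar's internal scheduler:

```python
from graphlib import TopologicalSorter

# Dependency graph mirroring the example pipeline:
# each key maps a node name to the set of node names it depends on.
graph = {
    "search_node": set(),          # no dependencies -> runs first
    "llm_node": {"search_node"},   # waits for search_node's output
}

order = list(TopologicalSorter(graph).static_order())
print(order)  # search_node comes before llm_node
```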
nodeConfig
A JSON dictionary that contains the configuration for the node. The values in the dictionary differ depending on the type of node you are configuring.
nodeConfig Values
Both the search and LLM nodes utilize the same values that can be used with the default search and generate endpoints.
The only values that differ are as follows.
LLM:
promptTemplate - The prompt that will be used in the node.
sources - A list containing strings of Layar document IDs. In most cases this will be supplied by the LAYARSEARCH node.
LAYARSEARCH:
resultFormat - A string value containing either RAWTEXT or DOCUMENT, which determines whether the node returns the Layar IDs of the documents or the raw text of the documents.
Using inputParameters in nodeConfig
You can utilize inputParameters in nodeConfig as a way to insert variables into your pipeline. In the example above, the only inputParameter being used is prompt. Inside the nodeConfig, promptTemplate contains ${prompt}. This means that when the template is run, the supplied prompt is inserted into the promptTemplate field. This can be done with other sections of the nodeConfig as well.
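The substitution can be mimicked locally with Python's string.Template, which uses the same ${name} placeholder syntax. This is a sketch of the behavior, not Layar's actual implementation:

```python
from string import Template

# The nodeConfig field as written in the pipeline template above
node_config = {"promptTemplate": "${prompt}"}

# inputParameters supplied when the pipeline is run
input_parameters = {"prompt": "summarize the document"}

# ${prompt} is replaced with the supplied value
rendered = Template(node_config["promptTemplate"]).substitute(input_parameters)
print(rendered)  # -> summarize the document
```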
Creating the Pipeline
Once you have the template created, POST it to /layar/v2/gpt/pipeline, which will return an ID linked to the data fabric you created it on.
import requests
from requests.auth import HTTPBasicAuth

envUrl = "https://YOUR_LAYAR_ENVIRONMENT"  # base URL of your Layar environment
clientId = "YOUR_CLIENT_ID"                # your OAuth client ID
clientSecret = "YOUR_CLIENT_SECRET"        # your OAuth client secret

SAVED_LIST_ID = "AZxJZq7GX7kSH_xsJJeY"  # fill in your saved list ID
PROMPT = "summarize the document"  # supplied as the ${prompt} input parameter at run time
TEMPLATE = {
    "name": "ds-template-2",
    "inputParameterNames": ["prompt"],
    "nodes": [
        {
            "name": "search_node",
            "type": "LAYARSEARCH",
            "dependencies": [],
            "nodeConfig": {
                "savedListIds": [SAVED_LIST_ID],
                "resultFormat": "DOCUMENT",
            },
        },
        {
            "name": "llm_node",
            "type": "LLM",
            "dependencies": ["search_node"],
            "nodeConfig": {
                "promptTemplate": "${prompt}",
                "model": "gpt-oss-120b",
                "sources": ["${search_node}"],
            },
        },
    ],
}
DATA_FABRIC = '1031' # override if needed, e.g. "my-fabric-name"
# =============================================================================
templateUrl = f"{envUrl}/layar/v2/gpt/pipeline"
token = requests.post(
    f"{envUrl}/connect/oauth/token",
    headers={"Accept": "application/json"},
    auth=HTTPBasicAuth(clientId, clientSecret),
    params={"grant_type": "client_credentials", "scope": "read write"},
).json().get("access_token")
headers = {
    "Accept": "application/json",
    "Content-Type": "application/json",
    "Authorization": f"Bearer {token}",
    "X-Vyasa-Data-Fabric": DATA_FABRIC,
}
def create_template():
    import json
    print("Sending template:", json.dumps(TEMPLATE, indent=2))
    resp = requests.post(templateUrl, headers=headers, json=TEMPLATE)
    if not resp.ok:
        print("Create error:", resp.text)
        resp.raise_for_status()
    body = resp.json()
    template_id = body.get("id")
    if not template_id:
        raise ValueError(f"No id in create response: {body}")
    print(f"Template created (id={template_id})")
    return template_id

Running the Pipeline
The ID returned when creating the pipeline can be used going forward to run the pipeline with a POST to /layar/v2/gpt/pipeline/run.
{
    "templateId": template_id,
    "inputParameters": {"prompt": "Your_prompt_here"},
    "forceRestart": False,
    "outputAll": False,
}

templateId
A string value containing the ID received when creating the pipeline.
inputParameters
A JSON dictionary containing the values you want to insert into the pipeline.
forceRestart
A boolean value that determines whether the pipeline will restart automatically if it is currently running.
outputAll
A boolean value that determines whether the pipeline returns results from every node or only the final output produced by the last node.
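Putting those fields together, a run call might look like the sketch below. envUrl and headers are assumed to be set up as in the creation step, template_id is the ID returned when creating the pipeline, and build_run_payload/run_pipeline are illustrative helper names, not part of any Layar client library:

```python
import requests

def build_run_payload(template_id, prompt):
    """Build the body for POST /layar/v2/gpt/pipeline/run."""
    return {
        "templateId": template_id,
        "inputParameters": {"prompt": prompt},
        "forceRestart": False,
        "outputAll": False,
    }

def run_pipeline(envUrl, headers, template_id, prompt):
    # envUrl and headers come from the setup shown in the creation step.
    resp = requests.post(
        f"{envUrl}/layar/v2/gpt/pipeline/run",
        headers=headers,
        json=build_run_payload(template_id, prompt),
    )
    resp.raise_for_status()
    return resp.json()  # the response includes the ID used for polling
```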
Retrieving the Output
When you run the pipeline, it will output an ID. This ID can be used in a GET to the /layar/gpt/pipeline/async/{request_id} endpoint, which can be polled to see if the pipeline has finished processing and to retrieve the results.
import time

def poll_pipeline(request_id):
    poll_url = f"{envUrl}/layar/gpt/pipeline/async/{request_id}"
    print(f"Polling (requestId={request_id}) ...")
    while True:
        time.sleep(2)
        poll_resp = requests.get(poll_url, headers=headers)
        if poll_resp.status_code == 200:
            # Finished: return the pipeline results
            return poll_resp.json()
        if poll_resp.status_code == 202:
            # Still processing: keep polling
            continue
        # Unexpected status: return the body for inspection
        return poll_resp.json()
