Use Chrome's Prompt API to generate a trip planner in Angular

In this blog post, I describe how to build a trip planner application locally using Chrome’s Built-In Prompt API and Angular. The Angular application calls the Prompt API to create a language model and submits queries to Gemini Nano to provide details such as applying for a travel visa, the clothes to pack, and attractions to visit each day.

The benefit of using Chrome’s built-in AI is zero cost since the application uses the local model in Chrome Canary. This is the happy path when users use Chrome Dev or Chrome Canary. If users use a non-Chrome browser or an older version of Chrome, a fallback implementation should be available, such as calling Gemma or Gemini on Vertex AI to generate the trip plan.

Install Gemini Nano on Chrome

Update Chrome Dev/Canary to the latest version. As of this writing, the newest version of Chrome Canary is 133.

Please refer to this section to sign up for the early preview program of Chrome Built-in AI.
https://developer.chrome.com/docs/ai/built-in#get_an_early_preview

Please refer to this section to enable Gemini Nano on Chrome and download the model. https://developer.chrome.com/docs/ai/get-started#use_apis_on_localhost
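Once the flag is enabled and the model is downloaded, a quick check in the DevTools console confirms the API is ready. This is a sanity-check sketch; the availability values match the ones the validation logic checks later in this post.

// Run in the DevTools console of Chrome Canary (top-level await is supported there).
// 'readily' means the model is downloaded and ready to use;
// 'after-download' means it must be downloaded first; 'no' means unavailable.
const { available } = await ai.languageModel.capabilities();
console.log(available);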

Disable text safety classifier on Chrome

  1. (Local Development) Go to chrome://flags/#text-safety-classifier.
  2. (Local Development) Select Disabled.
  3. Click Relaunch or restart Chrome.

Scaffold an Angular Application

ng new prompt-api-demo

Install dependencies

npm i --save-exact --save-dev @types/dom-chromium-ai

This dependency provides the TypeScript typings of all the Chrome Built-in AI APIs. Therefore, developers can write elegant code to build AI applications in TypeScript.

In main.ts, add a reference tag to point to the package's typing definition file.

// main.ts

/// <reference types="dom-chromium-ai" />

Bootstrap the language model

import { EnvironmentProviders, InjectionToken, PLATFORM_ID, inject, makeEnvironmentProviders } from '@angular/core';
import { isPlatformBrowser } from '@angular/common';

export const AI_PROMPT_API_TOKEN = new InjectionToken<AILanguageModelFactory | undefined>('AI_PROMPT_API_TOKEN');

export function provideLanguageModel(): EnvironmentProviders {
   return makeEnvironmentProviders([
       {
           provide: AI_PROMPT_API_TOKEN,
           useFactory: () => {
               // Access window only in the browser; during server-side rendering the token resolves to undefined.
               const platformId = inject(PLATFORM_ID);
               const objWindow = isPlatformBrowser(platformId) ? window : undefined;
               return objWindow?.ai?.languageModel;
           },
       }
   ]);
}

I define environment providers to return the languageModel in the window.ai namespace. When the code injects the AI_PROMPT_API_TOKEN token, it can access the Prompt API and call its methods to submit queries to Gemini Nano.
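For example, any component or service can inject the token and query the model's availability. This is a minimal usage sketch, not part of the article's listings; the await must run inside an async function, and inject must run in an injection context.

// Hypothetical usage sketch of the token.
const promptApi = inject(AI_PROMPT_API_TOKEN);
const availability = (await promptApi?.capabilities())?.available; // 'readily' | 'after-download' | 'no'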

// app.config.ts

export const appConfig: ApplicationConfig = {
  providers: [
    provideLanguageModel()
  ]
};

In the application config, provideLanguageModel is added to the providers array.

Validate browser version and API availability

Chrome built-in AI is experimental, and the Prompt API is supported in Chrome version 131 and later. Therefore, I implement validation logic to ensure the API is available before displaying the user interface so users can enter text.

The validation rules include:

  • The browser is Chrome
  • The browser version is at least 131
  • The ai object exists in the window namespace
  • The Prompt API’s availability status is readily

export async function checkChromeBuiltInAI(): Promise<string> {
  if (!isChromeBrowser()) {
     throw new Error(ERROR_CODES.NOT_CHROME_BROWSER);
  }

  if (getChromVersion() < CHROME_VERSION) {
     throw new Error(ERROR_CODES.OLD_BROWSER);
  }

  if (!('ai' in globalThis)) {
     throw new Error(ERROR_CODES.NO_PROMPT_API);
  }

  const assistant = inject(AI_PROMPT_API_TOKEN);
  const status = (await assistant?.capabilities())?.available;
  if (!status) {
     throw new Error(ERROR_CODES.API_NOT_READY);
  } else if (status === 'after-download') {
     throw new Error(ERROR_CODES.AFTER_DOWNLOAD);
  } else if (status === 'no') {
     throw new Error(ERROR_CODES.NO_LARGE_LANGUAGE_MODEL);
  }

  return '';
}

The checkChromeBuiltInAI function ensures the Prompt API is defined and ready to use. If checking fails, the function throws an error. Otherwise, it returns an empty string.
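The isChromeBrowser and getChromVersion helpers are not listed in this post; a minimal sketch could parse the user agent string (an assumption, and an imperfect heuristic, since user agents can be spoofed).

// Hypothetical helpers based on the user agent string.
export const CHROME_VERSION = 131;

export function isChromeBrowser(): boolean {
  // Other Chromium-based browsers (Edge, Opera) also report 'Chrome/',
  // so exclude the most common ones explicitly.
  const userAgent = navigator.userAgent;
  return userAgent.includes('Chrome/') && !userAgent.includes('Edg/') && !userAgent.includes('OPR/');
}

export function getChromVersion(): number {
  const match = navigator.userAgent.match(/Chrome\/(\d+)/);
  return match ? parseInt(match[1], 10) : 0;
}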

export function isPromptAPISupported(): Observable<string> {
  return from(checkChromeBuiltInAI()).pipe(
     catchError(
        (e) => {
           console.error(e);
           return of(e instanceof Error ? e.message : 'unknown');
        }
     )
  );
}

The isPromptAPISupported function catches the error and returns an Observable of the error message.

Display the AI components

@Component({
    selector: 'app-detect-ai',
    imports: [PromptShowcaseComponent],
    template: `
    @let error = hasCapability();
    @if (!error) {
      <app-prompt-showcase />
    } @else if (error !== 'unknown') {
      {{ error }}
    }
    `
})
export class DetectAIComponent {
    hasCapability = toSignal(isPromptAPISupported(), { initialValue: '' });
}

The DetectAIComponent renders the PromptShowcaseComponent when there is no error. Otherwise, it displays the error message stored in the error template variable.

// prompt-showcase.component.ts 

@Component({
   selector: 'app-prompt-showcase',
   imports: [NgComponentOutlet],
   template: `
       @let outlet = componentOutlet();
       <ng-container *ngComponentOutlet="outlet.component; inputs: outlet.inputs" />
   `,
   changeDetection: ChangeDetectionStrategy.OnPush
})
export class PromptShowcaseComponent {
   promptService = inject(ZeroPromptService);

   componentOutlet = computed(() => {
      return {
        component: SystemPromptsComponent,
        inputs: {}
      };
   });
}

The PromptShowcaseComponent renders the SystemPromptsComponent dynamically.

Prompt Response Component

@Component({
 selector: 'app-prompt-response',
 imports: [TokenizationComponent, FormsModule, LineBreakPipe, NgTemplateOutlet],
 template: `
   @let responseState = state();
   <div>Prompt:
     <textarea rows="3" [(ngModel)]="query" [disabled]="responseState.disabled"></textarea>
     <button (click)="submitPrompt.emit()" [disabled]="responseState.submitDisabled">{{ responseState.text }}</button>
   </div>
   <div>Response:
     <p>{{ responseState.response | lineBreak }}</p>
   </div>
 `,
 changeDetection: ChangeDetectionStrategy.OnPush
})
export class PromptResponseComponent {
 state = input.required<PromptResponse>();
 query = model.required<string>();
 submitPrompt = output();
}

The PromptResponseComponent displays a text area where users can enter a query. Then, they click the button to submit the query to the local Gemini Nano, which generates a text answer. The submitPrompt output notifies the SystemPromptsComponent that a user query has been submitted. Finally, the LineBreakPipe pipe cleanses the response before displaying it.
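The LineBreakPipe is not listed in this post; a minimal sketch could normalize the model's raw output before rendering (an assumption about what "cleanses" means here).

// Hypothetical implementation of the pipe; the article does not show its source.
import { Pipe, PipeTransform } from '@angular/core';

@Pipe({ name: 'lineBreak' })
export class LineBreakPipe implements PipeTransform {
  transform(value: string): string {
    // Strip markdown bold markers and collapse runs of blank lines
    // so the generated text reads cleanly in the template.
    return value.replace(/\*\*/g, '').replace(/\n{3,}/g, '\n\n').trim();
  }
}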

System Prompts Component

// system-prompts.component.ts

@Component({
   selector: 'app-system-prompt',
   imports: [FormsModule, PromptResponseComponent],
   template: `
   <h3>System Prompts</h3>
   <div>System Prompt:
     <textarea rows="7" [(ngModel)]="systemPrompt"></textarea>
   </div>
   <app-prompt-response [state]="responseState()" [(query)]="query" (submitPrompt)="submitPrompt()" />
   `,
   styleUrl: './prompt.component.css',
   providers: [
     {
       provide: AbstractPromptService,
       useClass: SystemPromptService,
     }
   ],
   changeDetection: ChangeDetectionStrategy.OnPush
})
export class SystemPromptsComponent extends BasePromptComponent {
   systemPrompt = signal('');

   responseState = computed<PromptResponse>(() => ({
      ...this.state(),
      error: this.error(),
      response: this.response(),
   }));

   constructor() {
      super();
      this.query.set('');
      this.promptService.setPromptOptions({ systemPrompt: this.systemPrompt() });
   }
}

The SystemPromptsComponent displays a text area for users to update the system prompt that describes the context of the problem. The PromptResponseComponent allows users to input their queries and displays the results. The systemPrompt signal stores the system prompt, instructing Gemini Nano how to behave when answering the user query.

constructor() {
   super();
   this.query.set('');
   this.promptService.setPromptOptions({ systemPrompt: this.systemPrompt() });
}

The component's constructor sets the query's initial value and calls the SystemPromptService to update the system prompt of the Prompt API.

systemPrompt = signal(`You are a professional trip planner who helps travelers to plan a trip to a location. When a traveler specifies a country or city, you have to recommend how to apply for a travel visa, pack suitable clothes for the weather and essentials, and list the known attractions to visit daily. It is preferred to visit two to three attractions each day to maximize the value of the trip. If you don't know the answer, say, "I do not know the answer."`);

In this demo, Gemini Nano acts as a professional trip planner that helps travelers plan a trip to a foreign country. The system prompt instructs the LLM to provide details on travel visas, clothes to pack, and attractions to visit during the trip.

this.query.set('I will visit from Hong Kong to Taipei between Feb 13th to Feb 18th. Please help me plan the trip and assume I will arrive in the afternoon on day 1.'); 

The user travels from Hong Kong to Taipei for six days in February and asks Gemini Nano to plan the trip.

Base Component

@Directive({
   standalone: false
})
export abstract class BasePromptComponent {
   promptService = inject(AbstractPromptService);
   session = this.promptService.session;

   isLoading = signal(false);
   error = signal('');
   query = signal('Tell me about the job responsibility of an A.I. engineer, maximum 500 words.');
   response = signal('');

   state = computed(() => {
       const isLoading = this.isLoading();
       const isUnavailableForCall = isLoading || this.query().trim() === '';
       return {
           status: isLoading ? 'Processing...' : 'Idle',
           text: isLoading ? 'Progressing...' : 'Submit',
           disabled: isLoading,
           submitDisabled: isUnavailableForCall
       }
   });

   async submitPrompt() {
     try {
       this.isLoading.set(true);
       this.error.set('');
       this.response.set('');
       const answer = await this.promptService.prompt(this.query());
       this.response.set(answer);
     } catch (e) {
       const errMsg = e instanceof Error ? e.message : 'Error in submitPrompt';
       this.error.set(errMsg);
     } finally {
       this.isLoading.set(false);
     }
   }
}

The BasePromptComponent provides the submit functionality and signals to hold the query, response, and view states.

The submitPrompt method submits the query to Gemini Nano to generate text and assigns it to the response signal. While the LLM is busy, the isLoading signal is set to true, and the UI elements (text area and button) are disabled. When the signal is set back to false, the UI elements are re-enabled.

Define a service layer over the Prompt API

The SystemPromptService service encapsulates the logic of the Prompt API.

The createPromptSession method creates a session with a system prompt. When the service is destroyed, the ngOnDestroy method destroys the session to avoid memory leaks.

@Injectable({
 providedIn: 'root'
})
export class SystemPromptService extends AbstractPromptService implements OnDestroy  {
 #controller = new AbortController();

 override async createPromptSession(options?: PromptOptions): Promise<AILanguageModel | undefined> {
   const { systemPrompt = undefined } = options || {};
   return this.promptApi?.create({ systemPrompt, signal: this.#controller.signal });
 }

 ngOnDestroy(): void {
   this.destroySession();
 }
}

The AbstractPromptService defines standard methods other prompt services can inherit.

The createSessionIfNotExists method creates a session and keeps it in the #session signal for reuse. A session is recreated when the old one has very few tokens remaining (< 500 tokens).

export abstract class AbstractPromptService {
    promptApi = inject(AI_PROMPT_API_TOKEN);
    #session = signal<AILanguageModel | undefined>(undefined);
    // Read-only view of the session, consumed by BasePromptComponent and destroySession.
    session = this.#session.asReadonly();
    #tokenContext = signal<Tokenization | null>(null);
    #options = signal<PromptOptions | undefined>(undefined);

    resetSession(newSession: AILanguageModel | undefined) {
        this.#session.set(newSession);
        this.#tokenContext.set(null);
    }

   shouldCreateSession() {
       const session = this.#session();
       const context = this.#tokenContext();
       return !session || (context && context.tokensLeft < 500);
   }

   setPromptOptions(options?: PromptOptions) {
       this.#options.set(options);
   }

   async createSessionIfNotExists(): Promise<void> {
     if (this.shouldCreateSession()) {
        this.destroySession();
        const newSession = await this.createPromptSession(this.#options());
        if (!newSession) {
           throw new Error('Prompt API failed to create a session.');      
        }
        this.resetSession(newSession);
     }
   }
}

The abstract createPromptSession method allows concrete services to implement their own sessions. A session can have zero prompt, a system prompt, or an array of initial prompts.

abstract createPromptSession(options?: PromptOptions): Promise<AILanguageModel | undefined>;
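The ZeroPromptService injected by the PromptShowcaseComponent is not listed in this post; given the abstract contract, a minimal sketch could create a session without any prompt options (an assumption).

// Hypothetical sketch of the ZeroPromptService referenced earlier.
@Injectable({
  providedIn: 'root'
})
export class ZeroPromptService extends AbstractPromptService implements OnDestroy {
  #controller = new AbortController();

  override async createPromptSession(): Promise<AILanguageModel | undefined> {
    // A zero-prompt session: no system prompt and no initial prompts.
    return this.promptApi?.create({ signal: this.#controller.signal });
  }

  ngOnDestroy(): void {
    this.destroySession();
  }
}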

The prompt method creates a session when one does not exist. The session then accepts a query to generate and return the texts.

async prompt(query: string): Promise<string> {
    if (!this.promptApi) {
        throw new Error(ERROR_CODES.NO_PROMPT_API);
    }

    await this.createSessionIfNotExists();
    const session = this.#session();
    if (!session) {
        throw new Error('Session does not exist.');
    }
    const answer = await session.prompt(query);
    return answer;
}
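Note that the listings never show where the #tokenContext signal is refreshed. Under the early preview API, a session exposed maxTokens, tokensSoFar, and tokensLeft counters, so one option is to update the context after each call. This is a sketch of a method inside AbstractPromptService, assuming the Tokenization type holds these three counters.

// Hypothetical method; the article does not show where the token context is refreshed.
updateTokenContext(session: AILanguageModel) {
    this.#tokenContext.set({
        maxTokens: session.maxTokens,
        tokensSoFar: session.tokensSoFar,
        tokensLeft: session.tokensLeft,
    });
}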

The destroySession method destroys the session and resets the signals in the service.

destroySession() {
    const session = this.session();

    if (session) {
        session.destroy();
        console.log('Destroy the prompt session.');
        this.resetSession(undefined);
    }
}

In conclusion, software engineers can create Web AI applications without setting up a backend server or accumulating cloud LLM costs.
