
voiceSearch

Widget signature
instantsearch.widgets.voiceSearch({
  container: string|HTMLElement,
  // Optional parameters
  searchAsYouSpeak: boolean,
  templates: object,
  cssClasses: object,
});

About this widget

The voiceSearch widget lets the user perform a voice-based query.

It uses the Web Speech API, which only Chrome (from version 25) has implemented so far. This means the voiceSearch widget only works in Chrome on desktop and Android. It doesn't work in Chrome on iOS, which relies on WebKit instead.
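If you need to detect support yourself, for example to hide the widget's container in unsupported browsers, a minimal sketch looks like the following (Chrome exposes the API under a webkit prefix; reusing the #voicesearch container from the examples below is an assumption):

// Minimal feature detection for the Web Speech API.
// Chrome only exposes the prefixed webkitSpeechRecognition constructor.
const isSpeechSupported =
  'SpeechRecognition' in window || 'webkitSpeechRecognition' in window;

if (!isSpeechSupported) {
  // Hide the voice search container on unsupported browsers
  document.querySelector('#voicesearch').style.display = 'none';
}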

Examples

instantsearch.widgets.voiceSearch({
  container: '#voicesearch',
});

Options

container
type: string|HTMLElement
Required

The CSS selector or HTMLElement to insert the widget into.

instantsearch.widgets.voiceSearch({
  container: '#voicesearch',
});
searchAsYouSpeak
type: boolean
default: false
Optional

Whether or not to trigger the search as you speak. If false, search is triggered only after speech is finished. If true, search is triggered whenever the engine delivers an interim transcript.

instantsearch.widgets.voiceSearch({
  // ...
  searchAsYouSpeak: true,
});
templates
type: object
Optional

The templates to use for the widget.

instantsearch.widgets.voiceSearch({
  // ...
  templates: {
    // ...
  },
});
cssClasses
type: object
Optional

The CSS classes to override.

  • root: the root element of the widget.
  • button: the button element.
  • status: the status element.
instantsearch.widgets.voiceSearch({
  // ...
  cssClasses: {
    root: 'MyCustomVoiceSearch',
    button: [
      'MyCustomVoiceSearchButton',
      'MyCustomVoiceSearchButton--subclass',
    ],
    status: [
      'MyCustomVoiceSearchStatus',
      'MyCustomVoiceSearchStatus--subclass',
    ]
  },
});

Templates

buttonText
type: string|function
Optional

The template used for displaying the button.

instantsearch.widgets.voiceSearch({
  // ...
  templates: {
    buttonText: '🎙',
  },
});
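The template also accepts a function. A minimal sketch, assuming the function form receives the same state fields exposed to the status template (such as isListening):

instantsearch.widgets.voiceSearch({
  // ...
  templates: {
    // Switch the icon while the widget is listening
    buttonText({ isListening }) {
      return isListening ? '⏹' : '🎙';
    },
  },
});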
status
type: string|function
Optional

The template used for displaying the status.

instantsearch.widgets.voiceSearch({
  // ...
  templates: {
    status: `
      <p>status: {{status}}</p>
      <p>errorCode: {{errorCode}}</p>
      <p>isListening: {{isListening}}</p>
      <p>transcript: {{transcript}}</p>
      <p>isSpeechFinal: {{isSpeechFinal}}</p>
      <p>isBrowserSupported: {{isBrowserSupported}}</p>
    `,
  },
});

Customize the UI - connectVoiceSearch

If you want to create your own UI for the voiceSearch widget, you can use connectors.

It’s a 3-step process:

// 1. Create a render function
const renderVoiceSearch = (renderOptions, isFirstRender) => {
  // Rendering logic
};

// 2. Create the custom widget
const customVoiceSearch = instantsearch.connectors.connectVoiceSearch(
  renderVoiceSearch
);

// 3. Instantiate
search.addWidget(
  customVoiceSearch({
    // instance params
  })
);

Create a render function

This rendering function is called before the first search (init lifecycle step) and each time results come back from Algolia (render lifecycle step).

const renderVoiceSearch = (renderOptions, isFirstRender) => {
  const {
    isBrowserSupported, // boolean
    isListening, // boolean
    toggleListening, // function
    voiceListeningState, // object
    widgetParams, // object
  } = renderOptions;

  if (isFirstRender) {
    // Do some initial rendering and bind events
  }

  // Render the widget
};

Render options

isBrowserSupported
type: boolean

true if the user’s browser supports voice search.

const renderVoiceSearch = (renderOptions, isFirstRender) => {
  const { isBrowserSupported } = renderOptions;

  const container = document.querySelector('#voicesearch');

  if (isFirstRender && !isBrowserSupported) {
    const message = document.createElement('p');
    message.innerText = 'This browser is not supported.';
    container.appendChild(message);
  }
};
toggleListening
type: function

Starts listening to the user’s speech, or stops it if already listening.

const renderVoiceSearch = (renderOptions, isFirstRender) => {
  const { toggleListening } = renderOptions;

  const container = document.querySelector('#voicesearch');

  if (isFirstRender) {
    const button = document.createElement('button');
    button.textContent = 'Toggle';

    button.addEventListener('click', event => {
      toggleListening();
    });

    container.appendChild(button);
  }
};
isListening
type: boolean

true if the widget is currently listening to the user’s speech.

const renderVoiceSearch = (renderOptions, isFirstRender) => {
  const { isListening, toggleListening } = renderOptions;

  const container = document.querySelector('#voicesearch');

  if (isFirstRender) {
    const button = document.createElement('button');
    button.textContent = 'Toggle';

    button.addEventListener('click', event => {
      toggleListening();
    });

    container.appendChild(button);
  }

  container.querySelector('button').textContent =
    isListening ? 'Stop' : 'Start';
};
voiceListeningState
type: object

An object containing the following states regarding speech recognition:

  • status: string: current status (initial|askingPermission|waiting|recognizing|finished|error).
  • transcript: string: currently recognized transcript.
  • isSpeechFinal: boolean: true if speech recognition is finished.
  • errorCode: string|undefined: an error code, if any. Refer to the Web Speech API specification for more information.
const renderVoiceSearch = (renderOptions, isFirstRender) => {
  const { voiceListeningState, toggleListening } = renderOptions;
  const {
    status,
    transcript,
    isSpeechFinal,
    errorCode,
  } = voiceListeningState;

  const container = document.querySelector('#voicesearch');

  if (isFirstRender) {
    const state = document.createElement('div');
    state.innerHTML = `
      <p>status : <span class="status"></span></p>
      <p>transcript : <span class="transcript"></span></p>
      <p>isSpeechFinal : <span class="is-speech-final"></span></p>
      <p>errorCode : <span class="error-code"></span></p>
    `;
    container.appendChild(state);

    const button = document.createElement('button');
    button.textContent = 'Toggle';
    button.addEventListener('click', event => {
      toggleListening();
    });
    container.appendChild(button);
  }
  container.querySelector('.status').innerText = status;
  container.querySelector('.transcript').innerText = transcript;
  container.querySelector('.is-speech-final').innerText = isSpeechFinal;
  container.querySelector('.error-code').innerText = errorCode || '';
};
widgetParams
type: object

All original widget options forwarded to the render function.

const renderVoiceSearch = (renderOptions, isFirstRender) => {
  const { widgetParams } = renderOptions;

  widgetParams.container.innerHTML = '...';
};

const customVoiceSearch = instantsearch.connectors.connectVoiceSearch(
  renderVoiceSearch
);

search.addWidget(
  customVoiceSearch({
    container: document.querySelector('#voicesearch'),
  })
);

Create and instantiate the custom widget

We first create the custom widget from our render function, then we instantiate it. When doing so, you can provide two types of parameters:

  • Instance parameters: predefined parameters that configure the behavior of Algolia.
  • Your own parameters: to make the custom widget generic.

Both types of parameters are available as widgetParams, inside renderOptions in the render function.

const customVoiceSearch = instantsearch.connectors.connectVoiceSearch(
  renderVoiceSearch
);

search.addWidget(
  customVoiceSearch({
    // Optional parameters
    searchAsYouSpeak: boolean,
  })
);
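As a sketch of how both kinds of parameters flow through together: the container parameter below is one of our own (not predefined), while searchAsYouSpeak is a predefined instance parameter, and both arrive untouched in widgetParams:

const renderVoiceSearch = (renderOptions, isFirstRender) => {
  const { widgetParams } = renderOptions;

  // Both parameter types are forwarded in widgetParams
  widgetParams.container.textContent = widgetParams.searchAsYouSpeak
    ? 'Searching as you speak'
    : 'Searching once you finish speaking';
};

const customVoiceSearch = instantsearch.connectors.connectVoiceSearch(
  renderVoiceSearch
);

search.addWidget(
  customVoiceSearch({
    searchAsYouSpeak: true, // instance parameter
    container: document.querySelector('#voicesearch'), // custom parameter
  })
);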

Instance options

searchAsYouSpeak
type: boolean
default: false
Optional

Whether or not to trigger the search as you speak. If false, search is triggered only after speech is finished. If true, search is triggered as many times as the engine delivers an interim transcript.

customVoiceSearch({
  searchAsYouSpeak: true,
});

Full example

// Create a render function
const renderVoiceSearch = (renderOptions, isFirstRender) => {
  const { isListening, toggleListening, voiceListeningState } = renderOptions;

  const container = document.querySelector('#voicesearch');

  if (isFirstRender) {
    const button = document.createElement('button');
    button.addEventListener('click', event => {
      toggleListening();
    });
    container.appendChild(button);

    const state = document.createElement('pre');
    container.appendChild(state);
  }

  container.querySelector('button').textContent =
    isListening ? 'Stop' : 'Start';

  container.querySelector('pre').textContent =
    JSON.stringify(voiceListeningState, null, 2);
};

// Create the custom widget
const customVoiceSearch = instantsearch.connectors.connectVoiceSearch(
  renderVoiceSearch
);

// Instantiate the custom widget
search.addWidget(
  customVoiceSearch({
    container: document.querySelector('#voicesearch'),
  })
);

HTML output

<div class="ais-VoiceSearch">
  <button class="ais-VoiceSearch-button" type="button" title="Search by voice">
    ...
  </button>
  <div class="ais-VoiceSearch-status">
    ...
  </div>
</div>
