Re: MCL interface for speech recog on AV Macs?
- To: firstname.lastname@example.org
- Subject: Re: MCL interface for speech recog on AV Macs?
- From: email@example.com (Bill Andersen)
- Date: 5 Oct 1993 22:40:34 -0400
- Distribution: world
- Newsgroups: comp.lang.lisp.mcl
- Organization: U of Maryland, Dept. of Computer Science, Coll. Pk., MD 20742
- References: <m0ok76D-000005C@lamd01>
In article <m0ok76D-000005C@lamd01> ranson@LANNION.cnet.fr (Ranson) writes:
>When PlainTalk recognizes a command, it uses a rule-based system to select an
>AppleScript (more generically, OSA) script to execute. The rules and the scripts
>may be provided with an application, or written by the user.
>So the trick is to support AppleScript. If you don't, you can use QuicKeys 3.0,
>which is OSA-compliant.
That's not what I meant by a programmatic interface. How does the
recognition software know what words to expect to hear? Its vocabulary
is not infinite. If a lexicon can be specified, how does one specify it?
Does it rely on some sort of Markov model to generate expectations
about which word(s) will come next in the speech stream? If so, what
are the toolbox calls for specifying the model? Can it recognize only
commands? Can it be controlled midstream, while input is coming in? If
so, what are the calls to control it? Finally, are there (existing or
planned) MCL hooks into it?
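For concreteness, here is a toy bigram (first-order Markov) sketch of the
kind of "expectation" mechanism I mean. This is purely illustrative Common
Lisp of my own -- it is not any actual PlainTalk or toolbox API, and all
the names are made up:

```lisp
;; Toy bigram model: given the last word heard, return the words the
;; recognizer should expect next, most frequent first.

(defvar *bigram-counts* (make-hash-table :test #'equal)
  "Maps a word to an alist of (next-word . count).")

(defun observe (word next)
  "Record one observed WORD -> NEXT transition."
  (let* ((alist (gethash word *bigram-counts*))
         (pair  (assoc next alist :test #'equal)))
    (if pair
        (incf (cdr pair))
        (push (cons next 1) (gethash word *bigram-counts*)))))

(defun expectations (word)
  "Candidate next words after WORD, sorted by observed frequency."
  (mapcar #'car
          (sort (copy-list (gethash word *bigram-counts*))
                #'> :key #'cdr)))

;; (observe "open" "file")
;; (observe "open" "file")
;; (observe "open" "window")
;; (expectations "open")  =>  ("file" "window")
```

The question stands: does PlainTalk maintain something like this
internally, and if so, can an application (or MCL) feed it?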
/ Bill Andersen (firstname.lastname@example.org) /
/ University of Maryland /
/ Department of Computer Science /
/ College Park, Maryland 20742 /