
Re: MCL interface for speech recog on AV Macs?



In article <m0ok76D-000005C@lamd01> ranson@LANNION.cnet.fr (Ranson) writes:
>When PlainTalk recognizes a command, it uses a rule-based system to select an
>AppleScript (more generically, OSA) script to execute. The rules and the scripts
>may be provided with an application, or written by the user.
>So the trick is to support AppleScript. If you don't, you can use QuicKeys 3.0,
>which is OSA-compliant.
>     Daniel.
>
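
  To be concrete about the route Daniel describes: supporting OSA means
an application compiles and runs script text through the generic
scripting component.  In C it amounts to roughly the following (error
handling trimmed, and I'm writing the component calls from memory, so
treat the details as approximate):

    /* Compile and run a piece of script source through the generic
       OSA component (which dispatches to AppleScript or any other
       installed scripting language). */
    #include <Components.h>
    #include <AppleEvents.h>
    #include <OSA.h>

    static OSAError RunScriptText(const char *text, long length)
    {
        ComponentInstance osa;
        AEDesc            source;
        OSAID             scriptID = kOSANullScript;
        OSAID             resultID = kOSANullScript;
        OSAError          err;

        osa = OpenDefaultComponent(kOSAComponentType,
                                   kOSAGenericScriptingComponentSubtype);
        if (osa == NULL)
            return badComponentInstance;

        /* Wrap the source text in an AEDesc, compile it, and run it
           in the null (global) context. */
        AECreateDesc(typeChar, text, length, &source);
        err = OSACompile(osa, &source, kOSAModeNull, &scriptID);
        if (err == noErr)
            err = OSAExecute(osa, scriptID, kOSANullScript,
                             kOSAModeNull, &resultID);

        AEDisposeDesc(&source);
        OSADispose(osa, scriptID);
        OSADispose(osa, resultID);
        CloseComponent(osa);
        return err;
    }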

  But that's not what I meant by a programmatic interface.  How does
the recognition software know what words to expect to hear?  Its
vocabulary is not infinite.  If a lexicon can be specified, how does
one specify it?  Does it rely on some sort of Markov model to generate
expectations about which word(s) will come next in the speech stream?
If so, what are the toolbox calls to specify the model?  Can it
recognize only commands?  Can it be controlled midstream, while input
is coming in?  If so, what are the calls to control it?  Finally, are
there (existing or planned) MCL hooks into it?
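
  To make the question concrete, here is the sort of call sequence I
am hoping exists.  Every trap name below is invented -- I'm asking
whether anything like it is really there, and whether MCL exposes it:

    /* Hypothetical -- none of these traps is known to exist.  This is
       just the shape of interface I'm asking about: build a finite
       language model (lexicon), attach it to a recognizer, and start
       and stop listening under program control. */
    #include <Types.h>

    typedef struct OpaqueSRRecognizer    *SRRecognizer;
    typedef struct OpaqueSRLanguageModel *SRLanguageModel;

    /* Hypothetical toolbox traps. */
    extern OSErr SRNewRecognizer(SRRecognizer *rec);
    extern OSErr SRNewLanguageModel(SRLanguageModel *model);
    extern OSErr SRAddWord(SRLanguageModel model, const char *word);
    extern OSErr SRSetLanguageModel(SRRecognizer rec, SRLanguageModel model);
    extern OSErr SRStartListening(SRRecognizer rec);
    extern OSErr SRStopListening(SRRecognizer rec);
    extern OSErr SRDisposeObject(void *srObject);

    static OSErr ListenForThreeCommands(void)
    {
        SRRecognizer    rec;
        SRLanguageModel lexicon;
        OSErr           err;

        if ((err = SRNewRecognizer(&rec)) != noErr)
            return err;

        /* Specify the finite vocabulary the recognizer should expect. */
        SRNewLanguageModel(&lexicon);
        SRAddWord(lexicon, "open");
        SRAddWord(lexicon, "close");
        SRAddWord(lexicon, "quit");
        SRSetLanguageModel(rec, lexicon);

        /* Midstream control: the application decides when the
           recognizer is listening, not the other way around. */
        SRStartListening(rec);
        /* ... recognition results would arrive via the event loop ... */
        SRStopListening(rec);

        SRDisposeObject(lexicon);
        SRDisposeObject(rec);
        return noErr;
    }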

  ...bill

-- 
   / Bill Andersen (waander@cs.umd.edu) /
  / University of Maryland             /
 / Department of Computer Science     /
/ College Park, Maryland  20742      /