Deep Learning homework 1 at NTUT: Taiwanese speech recognition with Kaldi.
Step 1. Go to the Kaldi workspace.
$ cd ~/kaldi/egs/kakdi_taiwanese/s5
Step 2. Create a bash script that converts the audio sample rate from 22 kHz to 16 kHz.
$ gedit wav22k_16k.sh
$ chmod +x wav22k_16k.sh
$ ./wav22k_16k.sh
Result: 16 kHz WAV files are generated under train/wav and test/wav.
Step 3. Create a Python script that converts train.csv to train.txt.
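A minimal sketch of what wav22k_16k.sh might contain, assuming sox is installed; the `src`/`dst` arguments are placeholders for the real train/wav and test/wav paths:

```shell
#!/usr/bin/env bash
# Hypothetical sketch: downsample every WAV under $1 to 16 kHz under $2
# with sox. Directory names and layout are assumptions, not the
# homework's actual script.
resample_dir() {
  local src="$1" dst="$2"
  mkdir -p "$dst"
  for f in "$src"/*.wav; do
    [ -e "$f" ] || continue                      # skip if no files match
    sox "$f" -r 16000 "$dst/$(basename "$f")"    # resample to 16 kHz
  done
}
resample_dir src dst
```

The same function would be called once for the train set and once for the test set.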
$ chmod +x HW1_csv2text.py
$ python HW1_csv2text.py
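The conversion HW1_csv2text.py performs can be sketched in shell, assuming train.csv holds `id,transcript` rows with a header line (the column layout is an assumption) and that Kaldi's text file wants one `utt_id transcript` line per utterance:

```shell
# Hypothetical sketch of the csv-to-text step; the toy train.csv below
# stands in for the real homework data.
printf 'id,text\nutt1,li ho\nutt2,to sia\n' > train.csv
# Skip the header, replace the comma with a space.
awk -F',' 'NR > 1 { print $1, $2 }' train.csv > train.txt
```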
Step 4. Delete the last row of lexicon.txt, then copy it into the lexicon directory; rename train.txt to text, then copy it into the data/train and data/test directories.
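These copy/rename steps might look like the following, assuming the usual Kaldi layout with the lexicon under data/local/dict (the paths and the toy input files are assumptions; `head -n -1` requires GNU coreutils):

```shell
# Hypothetical sketch of step 4 with toy stand-in files.
mkdir -p data/local/dict data/train data/test
printf 'a a\nb b\nlast last\n' > lexicon.txt
head -n -1 lexicon.txt > data/local/dict/lexicon.txt  # drop the last row
printf 'utt1 li ho\n' > train.txt
cp train.txt data/train/text                          # train.txt renamed to text
cp train.txt data/test/text
```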
Step 5. Edit run_tdnn.sh to set the network architecture and the number of GPUs.
$ gedit local/chain/run_tdnn.sh
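The kind of edits involved, following the variable names used in stock Kaldi nnet3 chain recipes (a hedged config fragment; the exact names in this recipe may differ):

```shell
# Hypothetical excerpt of run_tdnn.sh settings.
num_jobs_initial=1   # parallel training jobs (GPUs) at the start
num_jobs_final=1     # parallel training jobs (GPUs) at the end
num_epochs=4
# The network itself is defined via xconfig layer lines, e.g.:
#   relu-batchnorm-layer name=tdnn1 dim=512
```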
Step 6. Run run.sh.
$ chmod +x run.sh
$ ./run.sh