SYNOPSIS
caffe <COMMAND> <FLAGS>
DESCRIPTION
Caffe is a deep learning framework made with expression, speed, and modularity in mind. It is developed by the Berkeley Vision and Learning Center (BVLC) and community contributors.
- train: train or finetune a model
- test: score a model
- device_query: show GPU diagnostic information
- time: benchmark model execution time
FREQUENTLY USED FLAGS
- -gpu (Optional; run in GPU mode on given device IDs separated by ','. Use '-gpu all' to run on all available GPUs. The effective training batch size is multiplied by the number of devices.) type: string default: ""
- -iterations (The number of iterations to run.) type: int32 default: 50
- -model (The model definition protocol buffer text file.) type: string default: ""
- -sighup_effect (Optional; action to take when a SIGHUP signal is received: snapshot, stop or none.) type: string default: "snapshot"
- -sigint_effect (Optional; action to take when a SIGINT signal is received: snapshot, stop or none.) type: string default: "stop"
- -snapshot (Optional; the snapshot solver state to resume training.) type: string default: ""
- -solver (The solver definition protocol buffer text file.) type: string default: ""
- -weights (Optional; the pretrained weights to initialize finetuning, separated by ','. Cannot be set simultaneously with snapshot.) type: string default: ""
- -help (Show complete help messages.)
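The -solver flag expects a solver definition in protocol buffer text format. A minimal sketch of such a file follows; the file names and numeric values here are illustrative placeholders, not recommended settings.

```
# solver.prototxt -- points at the network definition and sets the training schedule
net: "train_val.prototxt"        # model definition file (see -model above)
base_lr: 0.01                    # initial learning rate
lr_policy: "step"                # drop the learning rate in steps
gamma: 0.1                       # multiply the learning rate by this factor at each step
stepsize: 10000                  # step every 10000 iterations
momentum: 0.9
weight_decay: 0.0005
max_iter: 45000                  # total number of training iterations
snapshot: 5000                   # write a snapshot every 5000 iterations
snapshot_prefix: "snapshots/mynet"
solver_mode: GPU                 # or CPU
```

With snapshots enabled, interrupted training can later be resumed through the -snapshot flag, e.g. "$ caffe train -solver solver.prototxt -snapshot snapshots/mynet_iter_5000.solverstate" (the solver state file name follows the snapshot_prefix above).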
OTHER CAFFE UTILITIES
Apart from the "caffe" command line utility, some additional utilities are also available; run them with the "-h" or "--help" argument to see the corresponding help.
EXAMPLES
Train a new network
  $ caffe train -solver solver.prototxt
Fine-tune a network
  $ caffe train -solver solver.prototxt -weights pre_trained.caffemodel
Test (evaluate) a trained model for 100 iterations, on GPU 0
  $ caffe test -model train_val.prototxt -weights bvlc_alexnet.caffemodel -gpu 0 -iterations 100
Run a benchmark against AlexNet on GPU 0
  $ caffe time -model deploy.prototxt -gpu 0
Check CUDA device availability of GPU 0
  $ caffe device_query -gpu 0
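The test and time commands take a network definition via -model. A minimal deploy-style network in protocol buffer text format might look like the following sketch; the network name, layer names, and shapes are made up for illustration.

```
name: "TinyNet"
layer {
  name: "data"
  type: "Input"
  top: "data"
  # batch size, channels, height, width
  input_param { shape: { dim: 1 dim: 3 dim: 227 dim: 227 } }
}
layer {
  name: "fc1"
  type: "InnerProduct"
  bottom: "data"
  top: "fc1"
  inner_product_param { num_output: 10 }  # fully connected layer with 10 outputs
}
```

Saved as e.g. tinynet.prototxt, its forward/backward timing could be measured with "$ caffe time -model tinynet.prototxt -gpu 0".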
AUTHOR
This manpage was written by Zhou Mo <[email protected]> with the help of txt2man for Debian, according to the program's help message.