Update readme.md
parent cddc5f4e1d
commit 47b9279725
@@ -14,4 +14,4 @@ We derived the image embeddings by using a CLIP encoder and mapping it with the
Replace **image-dir** and **llava-ckpt** with the path to your **test image folder** and to your **pytorch_model-00003-of-00003.bin** file, then run:
-`python convert_images_to_vectors.py --image-dir ./datasets/coco/val2017 --output-dir imgVecs --vision-model openai/clip-vit-large-patch14-336 --proj-dim 5120 --llava-ckpt ./datasets/pytorch_model-00003-of-00003.bin --batch-size 64`
+`python starter.py --image-dir ./datasets/coco/val2017 --output-dir imgVecs --vision-model openai/clip-vit-large-patch14-336 --proj-dim 5120 --llava-ckpt ./datasets/pytorch_model-00003-of-00003.bin --batch-size 64`
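
For orientation, here is a minimal sketch of what a conversion script driven by these flags could look like, assuming a Hugging Face CLIP vision encoder and a LLaVA-style linear multimodal projector. The checkpoint key `model.mm_projector.weight`, the use of last-layer patch features, and the sample image path are assumptions for illustration, not details confirmed by this commit.

```python
# Sketch of the image -> vector path suggested by the flags above.
# Assumptions (not confirmed by this commit): the projector is a single
# linear layer stored under "model.mm_projector.weight"/".bias" in
# pytorch_model-00003-of-00003.bin, and last-layer patch features are used
# (LLaVA itself typically selects a penultimate layer).
import torch
from PIL import Image
from transformers import CLIPImageProcessor, CLIPVisionModel

MODEL = "openai/clip-vit-large-patch14-336"  # --vision-model
vision_model = CLIPVisionModel.from_pretrained(MODEL).eval()
processor = CLIPImageProcessor.from_pretrained(MODEL)

# --llava-ckpt: the shard that carries the multimodal projector weights.
ckpt = torch.load("./datasets/pytorch_model-00003-of-00003.bin", map_location="cpu")
w = ckpt["model.mm_projector.weight"].float()  # assumed key; shape (5120, 1024)
b = ckpt["model.mm_projector.bias"].float()

image = Image.open("example.jpg").convert("RGB")  # hypothetical test image
pixels = processor(images=image, return_tensors="pt").pixel_values
with torch.no_grad():
    # Patch embeddings from the CLIP vision tower, CLS token dropped.
    feats = vision_model(pixels).last_hidden_state[:, 1:]
    vecs = feats @ w.T + b  # map 1024-d CLIP features to --proj-dim 5120
print(vecs.shape)  # (1, 576, 5120) for a 336px input with 14px patches
```

The `.float()` casts guard against a dtype mismatch, since LLaVA shards are commonly stored in half precision while the CLIP features come out in float32.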