I am training py-faster-rcnn on a custom dataset, following the instructions at https://github.com/deboc/py-faster-rcnn/blob/master/help/readme.md.
However, I am getting the following error:
Preparing training data...
Process Process-1:
Traceback (most recent call last):
  File "/usr/lib/python2.7/multiprocessing/process.py", line 258, in _bootstrap
    self.run()
  File "/usr/lib/python2.7/multiprocessing/process.py", line 114, in run
    self._target(*self._args, **self._kwargs)
  File "./train_faster_rcnn_alt_opt.py", line 122, in train_rpn
    roidb, imdb = get_roidb(imdb_name)
  File "./train_faster_rcnn_alt_opt.py", line 67, in get_roidb
    roidb = get_training_roidb(imdb)
  File "/home/work/code/py-faster-rcnn/tools/../lib/fast_rcnn/train.py", line 122, in get_training_roidb
    rdl_roidb.prepare_roidb(imdb)
  File "/home/work/code/py-faster-rcnn/tools/../lib/roi_data_layer/roidb.py", line 31, in prepare_roidb
    gt_overlaps = roidb[i]['gt_overlaps'].toarray()
AttributeError: 'numpy.ndarray' object has no attribute 'toarray'
This is the code snippet from roidb.py (line 31):
for i in xrange(len(imdb.image_index)):
    roidb[i]['image'] = imdb.image_path_at(i)
    roidb[i]['width'] = sizes[i][0]
    roidb[i]['height'] = sizes[i][1]
    # need gt_overlaps as a dense array for argmax
    gt_overlaps = roidb[i]['gt_overlaps'].toarray()
I am unable to find a way around this.
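For what it's worth, the type mismatch is easy to reproduce outside the repo; this is just a standalone check, and the array shape below is made up:

import numpy as np
import scipy.sparse

overlaps = np.zeros((3, 21), dtype=np.float32)
print(type(overlaps))                    # <type 'numpy.ndarray'>
# overlaps.toarray()                     # raises the same AttributeError as above

sparse_overlaps = scipy.sparse.csr_matrix(overlaps)
print(sparse_overlaps.toarray().shape)   # (3, 21) -- this is what prepare_roidb expects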
In the standard implementation, roidb[i]['gt_overlaps'] on the posted line is expected to be a sparse matrix; it is only converted to a dense numpy.ndarray later. You probably forgot that step when defining your own dataset's ground-truth reader. Other workarounds are possible (and perhaps preferable depending on the application), but if you want to stay as close to py-faster-rcnn as possible, wrap the gt_overlaps key of each ground-truth entry:

overlaps = scipy.sparse.csr_matrix(overlaps)

For further reference, see the original dataset files (e.g. lib/datasets/pascal_voc.py).
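If it helps, here is a minimal sketch of the relevant part of a custom ground-truth loader. The function name and arguments are illustrative, not part of py-faster-rcnn, but the dict keys match what prepare_roidb() reads:

import numpy as np
import scipy.sparse

def build_gt_entry(boxes, gt_classes, num_classes):
    # Build one roidb entry; gt_overlaps is stored as a CSR sparse matrix
    # so that prepare_roidb() can call .toarray() on it later.
    num_objs = boxes.shape[0]
    overlaps = np.zeros((num_objs, num_classes), dtype=np.float32)
    # A ground-truth box overlaps its own class completely.
    overlaps[np.arange(num_objs), gt_classes] = 1.0
    # The crucial step: wrap the dense array before storing it.
    overlaps = scipy.sparse.csr_matrix(overlaps)
    return {'boxes': boxes,
            'gt_classes': gt_classes,
            'gt_overlaps': overlaps,
            'flipped': False}

# Example with dummy data:
entry = build_gt_entry(np.array([[10, 10, 50, 50]]), np.array([1]), num_classes=21)
print(entry['gt_overlaps'].toarray())  # no AttributeError anymore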