ValueError: cannot reshape array of size 0 into shape (0,newaxis) #11

Open

paulaceccon opened this issue Dec 21, 2020 · 5 comments

@paulaceccon
This might not be the best place for this, but I keep getting this error when adding the predictions and ground truth:

ValueError: cannot reshape array of size 0 into shape (0,newaxis)

    metric_fn.add(np.array(pred), np.array(gt))
  File "/usr/local/lib/python3.6/dist-packages/mean_average_precision/mean_average_precision.py", line 63, in add
    match_table = compute_match_table(preds_c, gt_c, self.imgs_counter)
  File "/usr/local/lib/python3.6/dist-packages/mean_average_precision/utils.py", line 139, in compute_match_table
    difficult = np.repeat(gt[:, 5], preds.shape[0], axis=0).reshape(preds[:, 5].shape[0], -1).tolist()
ValueError: cannot reshape array of size 0 into shape (0,newaxis)

From the traceback, the issue seems to be happening here:

difficult = np.repeat(gt[:, 5], preds.shape[0], axis=0).reshape(preds[:, 5].shape[0], -1).tolist()

But if I perform the same operation manually:

print(pred)
print(gt)
print(np.repeat(gt[:, 5], pred.shape[0], axis=0).reshape(pred[:, 5].shape[0], -1).tolist())

I don't get any error at all:

[[  0.        81.        77.       222.         0.         0.724039]]
[[  0.  83.  72. 184.   0.   0.   0.]]
[[0.0]]
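
I suspect (just a guess from the traceback) that add() slices the predictions per class (preds_c = preds[preds[:, 4] == c]), so any class id with no predictions gives an empty preds_c, and NumPy cannot infer the second dimension of reshape(0, -1) on a size-0 array. A minimal sketch that reproduces the error outside the library:

import numpy as np

gt_c = np.array([[0., 83., 72., 184., 0., 0., 0.]])  # one ground-truth box for class c
preds_c = np.empty((0, 6))                           # no predictions for class c

# same expression as utils.py line 139; raises
# ValueError: cannot reshape array of size 0 into shape (0,newaxis)
np.repeat(gt_c[:, 5], preds_c.shape[0], axis=0).reshape(preds_c[:, 5].shape[0], -1)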
@mjkvaak

mjkvaak commented Jan 27, 2021

Update: this bug has been fixed in the latest build: pip install --upgrade git+https://github.com/bes-dev/mean_average_precision.git (follow the instructions in the readme). The instructions on the project's PyPI page (https://pypi.org/project/mean-average-precision/) are outdated.


The error message is misleading, since the error is actually related to the num_classes argument of the metric: for the script to run, all num_classes classes must be present in the predictions. However, this is certainly a bug: num_classes should rather refer to the classes of the ground truth.

This will fail:

import numpy as np
from mean_average_precision import MeanAveragePrecision

# create metric_fn
metric_fn = MeanAveragePrecision(num_classes=3)  # <- see here

# [xmin, ymin, xmax, ymax, class_id, difficult, crowd]
gt = np.array([
    [439, 157, 556, 241, 0, 0, 0],
])

# [xmin, ymin, xmax, ymax, class_id, confidence]
preds = np.array([
    [429, 219, 528, 247, 0, 0.460851],
    [433, 260, 506, 336, 1, 0.269833],  # <- see here: only 2 of the 3 classes are present in the preds
])

# add some samples to evaluation
for i in range(10):
    metric_fn.add(preds, gt)
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-118-04771a0da08d> in <module>
     15 # add some samples to evaluation
     16 for i in range(10):
---> 17     metric_fn.add(preds, gt)
     18 
     19 # compute PASCAL VOC metric

~/anaconda3/envs/rdaw/lib/python3.7/site-packages/mean_average_precision/mean_average_precision.py in add(self, preds, gt)
     61             if preds.shape[0] > 0:
     62                 preds_c = preds[preds[:, 4] == c]
---> 63                 match_table = compute_match_table(preds_c, gt_c, self.imgs_counter)
     64                 self.match_table[c] = self.match_table[c].append(match_table)
     65         self.imgs_counter = self.imgs_counter + 1

~/anaconda3/envs/rdaw/lib/python3.7/site-packages/mean_average_precision/utils.py in compute_match_table(preds, gt, img_id)
    137     img_ids = [img_id for i in range(preds.shape[0])]
    138     confidence = preds[:, 5].tolist()
--> 139     difficult = np.repeat(gt[:, 5], preds.shape[0], axis=0).reshape(preds[:, 5].shape[0], -1).tolist()
    140     crowd = np.repeat(gt[:, 6], preds.shape[0], axis=0).reshape(preds[:, 5].shape[0], -1).tolist()
    141     match_table = {

ValueError: cannot reshape array of size 0 into shape (0,newaxis)

But this is OK:

# create metric_fn
metric_fn = MeanAveragePrecision(num_classes=2)  # <- see here

# [xmin, ymin, xmax, ymax, class_id, difficult, crowd]
gt = np.array([
    [439, 157, 556, 241, 0, 0, 0],
])

# [xmin, ymin, xmax, ymax, class_id, confidence]
preds = np.array([
    [429, 219, 528, 247, 0, 0.460851],
    [433, 260, 506, 336, 1, 0.269833],  # <- see here: both classes are present in the preds
])

# add some samples to evaluation
for i in range(10):
    metric_fn.add(preds, gt)
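
For completeness, the metric can then be computed after the loop; the call below follows the usage pattern from the readme (the exact value() signature here is my recollection and may differ between builds):

# compute PASCAL VOC metric (signature per the readme; may differ between builds)
print(f"VOC PASCAL mAP: {metric_fn.value(iou_thresholds=0.5)['mAP']}")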

@bes-dev (Owner)

bes-dev commented Feb 1, 2021

Hi, thanks for your feedback.
This issue was fixed in master, but the PyPI package doesn't include the fix yet. I'll try to rebuild the PyPI package as soon as I can.

@einareinarsson

I still have the problem above with version 0.0.2.1 (installed with pip install --upgrade git+https://github.com/bes-dev/mean_average_precision.git). The ground-truth and prediction label sets are not equal:

truth_label_set
{0, 1, 2, 4, 5, 6, 8, 9, 10, 11}

pred_label_set
{0.0, 1.0, 2.0, 4.0, 5.0, 6.0, 8.0, 9.0, 11.0}

Moreover, there are additional labels (3, 7) that are not represented in the test set but are in the training set.

Now, shall I set num_classes=10 (according to the length of truth_label_set) or num_classes=11 (according to the length of pred_label_set), or num_classes=12 (according to the length of all possible labels)?
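
My guess, assuming the metric treats class ids as indices into range(num_classes) (the per-class slicing preds[:, 4] == c in the traceback above suggests this, but I have not verified it), is that num_classes needs to cover the largest label id rather than the count of distinct labels:

# assumption: class ids c are iterated over range(num_classes),
# so num_classes must be at least the largest class id + 1
truth_label_set = {0, 1, 2, 4, 5, 6, 8, 9, 10, 11}
pred_label_set = {0, 1, 2, 4, 5, 6, 8, 9, 11}
train_only_labels = {3, 7}  # in the training set but not the test set

num_classes = max(truth_label_set | pred_label_set | train_only_labels) + 1  # -> 12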

@bonastreyair

When will you upload the newest package to PyPI?

@bonastreyair

It is now live! https://pypi.org/project/mean-average-precision/2021.4.23.0/ Thanks!
