Hard-Positive Prototypical Networks for Few-Shot Classification
Prominent prototype-based classification (PbC) approaches, such as Prototypical Networks (ProtoNet), use the average of the samples within a class as the class prototype. In these methods, which we call Mean-PbC, a discriminant classifier is defined by the minimum Mahalanobis distance to the class prototypes. It is well known that if the data of each class is normally distributed, the Mahalanobis distance yields an optimal discriminant classifier.
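To make the Mean-PbC setup concrete, here is a minimal sketch of mean-prototype classification under a Mahalanobis distance. The function names, shapes, and the shared full precision matrix are illustrative assumptions, not the paper's code; with an identity precision the rule reduces to ProtoNet's Euclidean nearest-prototype classifier.

```python
import torch

def mean_prototypes(support: torch.Tensor, labels: torch.Tensor, n_classes: int) -> torch.Tensor:
    """support: (N, D) embeddings; labels: (N,) class ids in [0, n_classes)."""
    # Each prototype is the mean of that class's support embeddings.
    return torch.stack([support[labels == c].mean(dim=0) for c in range(n_classes)])

def mahalanobis_logits(queries: torch.Tensor, protos: torch.Tensor,
                       precision: torch.Tensor) -> torch.Tensor:
    """Negative squared Mahalanobis distance from each query to each prototype.

    queries: (M, D); protos: (C, D); precision: (D, D) shared inverse covariance.
    """
    diff = queries.unsqueeze(1) - protos.unsqueeze(0)           # (M, C, D)
    d2 = torch.einsum('mcd,de,mce->mc', diff, precision, diff)  # (M, C)
    return -d2  # argmax over classes = minimum-distance classifier

# Usage: an identity precision recovers ProtoNet's Euclidean rule.
support = torch.randn(25, 64)
labels = torch.arange(5).repeat_interleave(5)   # 5-way, 5-shot episode
queries = torch.randn(15, 64)
protos = mean_prototypes(support, labels, n_classes=5)
pred = mahalanobis_logits(queries, protos, torch.eye(64)).argmax(dim=1)
```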
We propose Hard-Positive Prototypical Networks (HPP-Net), which also employs the Mahalanobis distance while allowing that the class distribution may not be normal. HPP-Net learns class prototypes from hard (near-boundary) samples that are less similar to the class center and have a higher probability of misclassification. It also employs a learnable parameter to capture the covariance of the samples around the new prototypes.
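The following is a hedged sketch of this idea as I read it, not the authors' implementation: within each class, take the k support samples farthest from the class mean as "hard positives", average them into the prototype, and score queries with a Mahalanobis distance whose covariance is a learnable parameter. The class name, the value of k, the farthest-from-mean selection rule, and the diagonal covariance parameterization are all my assumptions.

```python
import torch
import torch.nn as nn

class HardPositivePrototypes(nn.Module):
    """Illustrative HPP-style head: hard-positive prototypes + learned covariance."""

    def __init__(self, dim: int, k: int = 2):
        super().__init__()
        self.k = k
        # Learnable log-variances give a positive diagonal covariance (assumed form).
        self.log_var = nn.Parameter(torch.zeros(dim))

    def prototypes(self, support: torch.Tensor, labels: torch.Tensor, n_classes: int) -> torch.Tensor:
        protos = []
        for c in range(n_classes):
            x = support[labels == c]                          # (n_c, D)
            center = x.mean(dim=0)
            dist = (x - center).pow(2).sum(dim=1)             # distance to class mean
            hard = x[dist.topk(min(self.k, len(x))).indices]  # near-boundary samples
            protos.append(hard.mean(dim=0))                   # hard-positive prototype
        return torch.stack(protos)                            # (C, D)

    def logits(self, queries: torch.Tensor, protos: torch.Tensor) -> torch.Tensor:
        inv_var = torch.exp(-self.log_var)                    # diagonal precision
        diff = queries.unsqueeze(1) - protos.unsqueeze(0)     # (M, C, D)
        return -(diff.pow(2) * inv_var).sum(dim=-1)           # negative Mahalanobis^2
```

In episodic training one would minimize cross-entropy over these logits on the query set, so the covariance parameters are learned jointly with the embedding network that produces the support and query features.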
The key finding of this paper is that a more accurate discriminant classifier can be attained by applying the Mahalanobis distance in which the mean is a "hard-positive prototype" and the covariance is learned by the model. Experimental results on the Omniglot, CUB, miniImageNet, and CIFAR-100 datasets demonstrate that HPP-Net achieves competitive performance compared to ProtoNet and several other prototype-based few-shot learning (FSL) methods.