Computational analysis to reduce classification cost keeping high accuracy of the sparser LPSVM

Publisher

International Association of Computer Science and Information Technology

Abstract

Vapnik's quadratic programming (QP)-based support vector machine (SVM) is a state-of-the-art classifier that combines high accuracy with sparsity. Moving one step further in the direction of sparsity, Vapnik proposed another SVM that uses linear programming (LP) for the cost function. Compared with the more complex QP-based machine, this LP machine is sparser yet achieves similar accuracy, which is essential for working with large datasets. Nevertheless, still greater sparsity is desirable, both for computational savings and for handling very large, complicated datasets. In this paper, we accelerate the classification speed of Vapnik's sparser LPSVM, while maintaining its complexity and accuracy, by applying computational techniques derived from a "unity outward deviation (ζ)" analysis of the kernel-computing vectors. Benchmarking shows that the proposed method reduces the classification cost of Vapnik's sparser LPSVM by up to 63%. Despite this large reduction in classification cost, the proposed algorithm attains classification accuracy very close to that of state-of-the-art machines such as Vapnik's QPSVM and LPSVM, while being simple to implement and applicable to large, highly complex datasets.
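To make the LP-versus-QP distinction in the abstract concrete, the following is a minimal, illustrative sketch of an L1-norm (linear programming) SVM of the kind attributed to Vapnik: the L1 norm of the weight vector replaces the quadratic term, so the whole training problem becomes a linear program. This is not the authors' ζ-analysis algorithm; the solver choice (`scipy.optimize.linprog`), the linear (non-kernel) formulation, and the toy dataset are all assumptions made for illustration.

```python
import numpy as np
from scipy.optimize import linprog

def lp_svm_fit(X, y, C=1.0):
    """Illustrative LP-SVM: min ||w||_1 + C*sum(xi)
    subject to y_i (w . x_i + b) >= 1 - xi_i,  xi_i >= 0.
    The LP variables are stacked as [w+ (d), w- (d), b+, b-, xi (n)],
    all nonnegative, with w = w+ - w- and b = b+ - b-."""
    n, d = X.shape
    # Objective: sum(w+) + sum(w-) + 0*b+ + 0*b- + C*sum(xi)
    c = np.concatenate([np.ones(2 * d), [0.0, 0.0], C * np.ones(n)])
    # Margin constraints rewritten in A_ub @ z <= b_ub form:
    # -y_i * x_i . (w+ - w-) - y_i * (b+ - b-) - xi_i <= -1
    A_ub = np.hstack([-y[:, None] * X, y[:, None] * X,
                      -y[:, None], y[:, None], -np.eye(n)])
    b_ub = -np.ones(n)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * (2 * d + 2 + n))
    z = res.x
    w = z[:d] - z[d:2 * d]
    b = z[2 * d] - z[2 * d + 1]
    return w, b

# Tiny linearly separable toy problem (assumed data).
X = np.array([[2.0, 2.0], [3.0, 3.0], [-2.0, -2.0], [-3.0, -1.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
w, b = lp_svm_fit(X, y, C=10.0)
preds = np.sign(X @ w + b)
```

Because the L1 penalty drives many coefficients exactly to zero, solutions of this program tend to be sparser than their QP counterparts, which is the property the paper pushes further.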

Citation

Karim, R., & Kundu, A. K. (2019). Computational analysis to reduce classification cost keeping high accuracy of the sparser LPSVM. International Journal of Machine Learning and Computing, 9(6), 728-733.
