A fast key points matching method for high resolution images of a planar mural
Received date: 2020-06-14
Online published: 2021-11-26
Existing keypoint-matching methods were designed for grayscale images and do not scale well to high-resolution images. Mural images typically have very high resolution and may contain regions whose textures share the same gray levels but differ in color. For this special class of images, this paper proposes a fast keypoint-matching algorithm for high-resolution mural images (NeoKPM for short). NeoKPM has two main innovations: (1) the homography matrix for rough registration of the original images is first estimated on downsampled copies, which substantially reduces the time required for keypoint matching; (2) a feature descriptor based on gray and color invariants is proposed, which can distinguish textures that have the same gray level but different colors, thereby improving the correctness of keypoint matching. The performance of the NeoKPM algorithm is evaluated on a library of real mural images. The experimental results show that, on mural images with a resolution of 80 million pixels, the number of correctly matched points per image pair is nearly 100 000 higher than that of SIFT (Scale-Invariant Feature Transform), the keypoint-matching speed is more than 20 times that of SIFT, and the average per-pixel error between image pairs is less than 0.04 pixels.
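To make the rough-registration step in innovation (1) concrete, the sketch below estimates a homography on downsampled copies of an image pair and rescales it to full resolution. This is a minimal illustration under stated assumptions, not the authors' NeoKPM implementation: it assumes OpenCV and NumPy, uses OpenCV's SIFT as a stand-in detector, and the downsampling factor s = 0.25 is an arbitrary illustrative choice.

```python
# Minimal sketch (not the NeoKPM code from the paper): estimate a rough
# homography on downsampled copies of two high-resolution images, then
# rescale it so it maps full-resolution coordinates.
import cv2
import numpy as np

def rough_homography(img1, img2, s=0.25):
    # Downsample both images to cut the cost of detection and matching.
    small1 = cv2.resize(img1, None, fx=s, fy=s, interpolation=cv2.INTER_AREA)
    small2 = cv2.resize(img2, None, fx=s, fy=s, interpolation=cv2.INTER_AREA)

    # Keypoints and descriptors on the small images (SIFT as a stand-in).
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(small1, None)
    kp2, des2 = sift.detectAndCompute(small2, None)

    # Lowe's ratio test, then robust homography estimation with RANSAC.
    matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
    good = [m[0] for m in matches
            if len(m) == 2 and m[0].distance < 0.75 * m[1].distance]
    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H_small, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)  # may be None

    # Rescale to full resolution: H_full = S^{-1} * H_small * S,
    # where S = diag(s, s, 1) scales full-resolution points down by s.
    S = np.diag([s, s, 1.0])
    return np.linalg.inv(S) @ H_small @ S
```

The conjugation by S = diag(s, s, 1) is what lets a homography estimated at low resolution act directly on full-resolution coordinates, so the coarse estimate can guide matching on the original images.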
Xinye ZHANG, Weiqing TONG, Haisheng LI. A fast key points matching method for high resolution images of a planar mural [J]. Journal of East China Normal University (Natural Science), 2021, 2021(6): 65-80. DOI: 10.3969/j.issn.1000-5641.2021.06.008
1 National Cultural Heritage Administration of the People's Republic of China. Technical regulations for digital surveying and mapping of murals in ancient buildings: WW/T 0082—2017 [S]. Beijing: Cultural Relics Press, 2017.
2 LOWE D G. Distinctive image features from scale-invariant keypoints [J]. International Journal of Computer Vision, 2004, 60(2): 91-110.
3 MATAS J, CHUM O, URBAN M, et al. Robust wide-baseline stereo from maximally stable extremal regions [J]. Image and Vision Computing, 2004, 22(10): 761-767.
4 BAY H, ESS A, TUYTELAARS T, et al. Speeded-up robust features (SURF) [J]. Computer Vision and Image Understanding, 2008, 110(3): 346-359.
5 RUBLEE E, RABAUD V, KONOLIGE K, et al. ORB: An efficient alternative to SIFT or SURF [C]// 2011 International Conference on Computer Vision. IEEE, 2011: 2564-2571.
6 MUKHERJEE D, WU Q M J, WANG G H. A comparative experimental study of image feature detectors and descriptors [J]. Machine Vision and Applications, 2015, 26(4): 443-466.
7 ABDEL-HAKIM A E, FARAG A A. CSIFT: A SIFT descriptor with color invariant characteristics [C]// 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06). IEEE, 2006: 1978-1983.
8 GEUSEBROEK J M, VAN DEN BOOMGAARD R, SMEULDERS A W M, et al. Color invariance [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2001, 23(12): 1338-1350.
9 ALCANTARILLA P F, BARTOLI A, DAVISON A J. KAZE features [C]// Computer Vision – ECCV 2012, Lecture Notes in Computer Science. Berlin: Springer-Verlag, 2012: 214-227.
10 ALCANTARILLA P F, NUEVO J, BARTOLI A. Fast explicit diffusion for accelerated features in nonlinear scale spaces [C]// Proceedings of the British Machine Vision Conference. BMVC, 2013: 13.1-13.11.
11 CHO Y J, KIM D, SAEED S, et al. Keypoint detection using higher order Laplacian of Gaussian [J]. IEEE Access, 2020, 8: 10416-10425.
12 SAVINOV N, SEKI A, LADICKY L, et al. Quad-networks: Unsupervised learning to rank for interest point detection [C]// 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2017: 3929-3937.
13 DETONE D, MALISIEWICZ T, RABINOVICH A. SuperPoint: Self-supervised interest point detection and description [C]// 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). IEEE, 2018: 337-349.
14 ONO Y, TRULLS E, FUA P, et al. LF-Net: Learning local features from images [C]// 32nd Conference on Neural Information Processing Systems (NIPS 2018). 2018: 6234-6244.
15 FISCHLER M A, BOLLES R C. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography [J]. Communications of the ACM, 1981, 24(6): 381-395.
16 ROUSSEEUW P J. Least median of squares regression [J]. Journal of the American Statistical Association, 1984, 79(388): 871-880.
17 FOTOUHI M, HEKMATIAN H, KASHANI-NEZHAD M A, et al. SC-RANSAC: Spatial consistency on RANSAC [J]. Multimedia Tools and Applications, 2019, 78(7): 9429-9461.
18 LIN W Y, WANG F, CHENG M M, et al. CODE: Coherence based decision boundaries for feature correspondence [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2018, 40(1): 34-47.
19 CHOU C C, SEO Y W, WANG C C. A two-stage sampling for robust feature matching [J]. Journal of Field Robotics, 2018, 35: 779-801.
20 HARTLEY R, ZISSERMAN A. Multiple View Geometry in Computer Vision [M]. 2nd ed. New York: Cambridge University Press, 2003.
21 MOREL J M, YU G S. ASIFT: A new framework for fully affine invariant image comparison [J]. SIAM Journal on Imaging Sciences, 2009, 2(2): 438-469.