
Improving Matrix Multiplication Using Parallel Computing




DOI: https://doi.org/10.15866/irecos.v15i2.20296

Abstract


Multiplication of large matrices requires considerable computation time, since the classic algorithm has O(n³) complexity. Because most image processing applications demand high computational throughput in minimal time, many sequential and parallel algorithms have been developed. In this paper, a matrix multiplication method is selected and analyzed. A performance evaluation shows that the chosen method is particularly effective for large matrices when implemented with parallel computing based on the OpenMP library.
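As a minimal sketch of the kind of OpenMP-based parallelization the abstract refers to (not the authors' actual implementation), the classic triple-loop multiplication can be parallelized over the rows of the result matrix. The matrix size N, the use of square double-precision matrices, and the initialization values are illustrative assumptions.

/* Minimal sketch: classic O(n^3) matrix multiplication parallelized
 * with OpenMP. Compile with: gcc -fopenmp matmul.c -o matmul
 * N and the matrix contents are assumptions for illustration only. */
#include <stdio.h>
#include <stdlib.h>
#include <omp.h>

#define N 1024  /* assumed matrix dimension */

int main(void) {
    /* Allocate A, B, C as flat N*N arrays and initialize them. */
    double *A = malloc(N * N * sizeof(double));
    double *B = malloc(N * N * sizeof(double));
    double *C = malloc(N * N * sizeof(double));
    for (int i = 0; i < N * N; i++) {
        A[i] = 1.0;
        B[i] = 2.0;
        C[i] = 0.0;
    }

    double start = omp_get_wtime();

    /* Distribute the rows of C across threads; each C[i][j] is written
     * by exactly one thread, so no synchronization is needed inside
     * the loop nest. */
    #pragma omp parallel for
    for (int i = 0; i < N; i++) {
        for (int j = 0; j < N; j++) {
            double sum = 0.0;
            for (int k = 0; k < N; k++) {
                sum += A[i * N + k] * B[k * N + j];
            }
            C[i * N + j] = sum;
        }
    }

    double elapsed = omp_get_wtime() - start;
    printf("Multiplied two %dx%d matrices in %.3f s using %d threads\n",
           N, N, elapsed, omp_get_max_threads());

    free(A); free(B); free(C);
    return 0;
}

Speedup can then be estimated by running the same program with different thread counts (e.g. via the OMP_NUM_THREADS environment variable) and comparing the reported times against a single-threaded run.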

Keywords


Matrix Multiplication; OpenMP; Parallel Processing; Processing Time; Speedup







