Research[1] and literature[2] on concurrency testing (also called concurrent testing) typically focus on testing software and systems that use concurrent computing. The purpose is, as with most software testing, to understand the behaviour and performance of such software, particularly assessing the stability of a system or application during normal activity.

Research into program concurrency started in the 1950s,[3] with research into testing concurrent programs appearing in the 1960s.[4] Examples of problems that concurrency testing might expose are incorrect shared-memory access and unexpected ordering of messages or thread execution.[5]: 2 [1] Resource contention resolution, scheduling, deadlock avoidance, priority inversion and race conditions are also highlighted as problem areas.[6]: 745 
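
For example, an unsynchronised update to shared memory can silently lose writes under particular thread interleavings. The following minimal Java sketch (hypothetical code, not taken from the cited sources; all names are illustrative) shows such a race condition, which a single test run may easily fail to expose:

    // Hypothetical illustration: two threads increment a shared counter without
    // synchronisation. The read-modify-write is not atomic, so increments can be
    // lost and the printed total may be less than 200000 on some runs.
    public class LostUpdateExample {
        private static int counter = 0; // shared, unsynchronised state

        public static void main(String[] args) throws InterruptedException {
            Runnable work = () -> {
                for (int i = 0; i < 100_000; i++) {
                    counter++; // not atomic: read, add, write back
                }
            };
            Thread t1 = new Thread(work);
            Thread t2 = new Thread(work);
            t1.start();
            t2.start();
            t1.join();
            t2.join();
            System.out.println("Expected 200000, observed " + counter);
        }
    }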

Selected history and approaches of concurrency testing

Approaches to concurrency testing range from the unit test level right up to the system test level.[7]

Some approaches that have been researched and applied to testing software concurrency are:

  • Execute a test once.[8]: 63 
This was considered ineffective for testing concurrency in a non-deterministic system, being equivalent to testing a sequential, non-concurrent program.
  • Execution of the same test sequence multiple times.[8]: 63 
Considered likely to find some issues arising from non-deterministic software execution.
This later became known as non-deterministic testing[9] (a sketch of this approach follows this list).
  • Deterministic testing.[8]: 63 
This is an approach that sets the system into a particular state so that code is executed in a known order.
It attempts to test combinations of synchronisation sequences for a specified input (for example, that shared-variable access is not corrupted, effectively testing for race conditions). The sequences are typically derived from non-deterministic test executions (a forced-interleaving sketch also follows this list).
  • Structural approaches / static analysis
Analysis of code structure, supported by static analysis tools.
An example was a heuristic approach.[11]
This led to the development of code checkers, for example jlint.[12] Static analysis tools and code checkers for concurrency bugs have also been researched and compared.[13]
See also List of tools for static code analysis.
  • Multi-user approach
This is an approach to testing program concurrency by exercising multiple user access, with the system serving different users or tasks simultaneously.[2][6]: 745 
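
A minimal sketch of the non-deterministic (repeated execution) approach, assuming an intentionally racy counter as the system under test (hypothetical Java code; all names are illustrative). The same concurrent test is executed many times in the hope that at least one scheduling interleaving exposes the defect:

    import java.util.concurrent.CountDownLatch;

    // Hypothetical sketch of non-deterministic testing: repeat the same
    // concurrent scenario many times so that different thread interleavings
    // are exercised; a single passing run proves little.
    public class RepeatedRunTest {

        // System under test: an unsynchronised counter with a race condition.
        static class Counter {
            int value = 0;
            void increment() { value++; } // not atomic
        }

        public static void main(String[] args) throws InterruptedException {
            int failingRuns = 0;
            for (int run = 0; run < 1_000; run++) {
                Counter counter = new Counter();
                CountDownLatch start = new CountDownLatch(1);
                Runnable task = () -> {
                    try {
                        start.await(); // release both threads together
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                    for (int i = 0; i < 1_000; i++) {
                        counter.increment();
                    }
                };
                Thread t1 = new Thread(task);
                Thread t2 = new Thread(task);
                t1.start();
                t2.start();
                start.countDown();
                t1.join();
                t2.join();
                if (counter.value != 2_000) {
                    failingRuns++; // a lost update was observed in this run
                }
            }
            System.out.println("Runs with lost updates: " + failingRuns + " / 1000");
        }
    }

Because scheduling is outside the test's control, repetition only raises the probability of hitting a faulty interleaving; it does not guarantee it.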
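
By contrast, a deterministic test forces a specific synchronisation sequence so that a suspected ordering problem is reproduced on every run. A hypothetical Java sketch (illustrative names, not from the cited sources) that uses a latch to force the reader to run before the writer:

    import java.util.concurrent.CountDownLatch;

    // Hypothetical sketch of deterministic testing: explicit synchronisation
    // points force one specific, repeatable interleaving of two threads so
    // that a suspected ordering defect shows up on every run.
    public class ForcedInterleavingTest {

        // System under test: a shared holder whose reader should not run
        // before the writer has published a value.
        static class SharedHolder {
            Integer value; // null until published
            void publish(int v) { value = v; }
            Integer read() { return value; }
        }

        public static void main(String[] args) throws InterruptedException {
            SharedHolder holder = new SharedHolder();
            CountDownLatch readDone = new CountDownLatch(1);

            Thread reader = new Thread(() -> {
                // Step 1: force the read to happen first.
                System.out.println("Reader observed: " + holder.read());
                readDone.countDown(); // allow the writer to proceed
            });
            Thread writer = new Thread(() -> {
                try {
                    readDone.await(); // Step 2: publish only after the read
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
                holder.publish(42);
            });

            reader.start();
            writer.start();
            reader.join();
            writer.join();
            // Under this forced order the reader sees null on every run,
            // turning an intermittent ordering defect into a repeatable one.
        }
    }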

Testing software and system concurrency should not be confused with stress testing, which is usually associated with loading a system beyond its defined limits. Testing of concurrent programs can expose problems even when a system is performing within its defined limits, and most of the approaches above do not rely on overloading a system. Some literature[6]: 745  states that concurrency testing is a prerequisite to stress testing.

Lessons learned from a study of concurrency bug characteristics

A study in 2008[11] analysed the bug databases of a selection of open-source software projects and was thought to be the first real-world study of concurrency bugs. 105 bugs were classified as concurrency bugs and analysed: 31 deadlock bugs and 74 non-deadlock bugs. The study had several findings for potential follow-up and investigation:

  • Approximately one-third of the concurrency bugs caused crashes or hanging programs.
  • Most non-deadlock concurrency bugs were atomicity or order violations.
That is, focusing on atomicity (protected use of shared data) or on ordering will potentially find most non-deadlock bugs (an illustrative sketch follows this list).
  • Most concurrency bugs involved only one or two threads.
That is, heavy simultaneous usage is not the trigger for these bugs; pairwise testing over threads is suggested as a potentially effective way to catch them.
  • Over 20% of deadlock bugs (7/31) occurred with a single thread.
  • Most deadlock concurrency bugs (30/31) involved only one or two resources.
This implies that pairwise testing from a resource-usage perspective could be applied to reveal deadlocks.
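
As an illustration of the atomicity-violation category referred to above (hypothetical Java code, not taken from the study; names are illustrative), a check and a subsequent act are each correct in isolation, but a second thread can interleave between them, so the pair needs to execute atomically, for example under a single lock. Consistent with the findings, only two threads are involved:

    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical illustration of an atomicity violation: the emptiness check
    // and the removal are each fine on their own, but another thread can remove
    // the last element between them, so the check-then-act pair must be
    // executed atomically (for example while holding one lock).
    public class CheckThenActExample {
        private static final List<Integer> items = new ArrayList<>(List.of(1));

        static void takeIfPresent() {
            if (!items.isEmpty()) { // check
                items.remove(0);    // act: may fail if another removal interleaves
            }
        }

        public static void main(String[] args) throws InterruptedException {
            Thread t1 = new Thread(CheckThenActExample::takeIfPresent);
            Thread t2 = new Thread(CheckThenActExample::takeIfPresent);
            t1.start();
            t2.start();
            t1.join();
            t2.join();
            System.out.println("Remaining items: " + items);
        }
    }

A test targeting this category would assert the invariant (at most one removal succeeds and no exception is thrown) while scheduling pairs of threads against each other.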

References

  1. Wang, Chao; Said, Mahmoud; Gupta, Aarti (21–28 May 2011). Coverage guided systematic concurrency testing. ICSE '11: Proceedings of the 33rd International Conference on Software Engineering. Waikiki. pp. 221–230.
  2. Dustin, Elfriede (28 December 2002). Effective Software Testing: 50 Ways to Improve Your Software Testing. Addison-Wesley Longman. p. 186. ISBN 0201794292.
  3. Leiner, A.L.; Notz, W.A.; Smith, J.L.; Weinberger, A. (July 1959). "PILOT—A New Multiple Computer System". Journal of the ACM. 6 (3): 313–335. doi:10.1145/320986.320987. S2CID 19867617.
  4. Dijkstra, Edsger W. (May 1968). "The structure of the "THE"-multiprogramming system". Communications of the ACM. 11 (5): 341–346. doi:10.1145/363095.363143. S2CID 2021311.
  5. "Concurrent Software Testing: A Systematic Review" (PDF). Archived from the original on 24 September 2015. Retrieved 4 March 2014.
  6. Binder, Robert V. (1999). Testing Object-Oriented Systems: Models, Patterns, and Tools. Addison-Wesley Longman. ISBN 0-201-80938-9.
  7. Melo, Silvana Morita; Souza, Simone do Rocio Senger de; Souza, Paulo Sérgio Lopes de; Carver, Jeffrey C. (2017). How to test your concurrent software: an approach for the selection of testing techniques. Conference on Systems, Programming, Languages, and Applications: Software for Humanity (SPLASH).
  8. Tai, K.C. (20–22 September 1989). Testing of concurrent software. Proceedings of the Thirteenth Annual International Computer Software & Applications Conference. Orlando, FL, USA. pp. 62–64.
  9. Hwang, Gwan-Hwan; Tai, Kuo-Chung; Huang, Ting-Lu (1995). "Reachability Testing: An Approach To Testing Concurrent Software". International Journal of Software Engineering and Knowledge Engineering. 5 (4): 493–510. doi:10.1142/S0218194095000241.
  10. Qi, Xiaofang; Li, Yueran (23–24 November 2018). Parallel Reachability Testing Based on Hadoop MapReduce. SATE 2018. Shenzhen, Guangdong, China. pp. 173–184. doi:10.1007/978-3-030-04272-1_11.
  11. Lu, Shan; Park, Soyeon; Seo, Eunsoo; Zhou, Yuanyuan (1–5 March 2008). Learning from mistakes: a comprehensive study on real world concurrency bug characteristics. ASPLOS XIII: Proceedings of the 13th International Conference on Architectural Support for Programming Languages and Operating Systems. Seattle, WA, USA. pp. 329–339.
  12. Artho, Cyrille; Biere, Armin (27–28 August 2001). Applying static analysis to large-scale, multi-threaded Java programs. Proceedings of the 2001 Australian Software Engineering Conference. Canberra, ACT, Australia. pp. 68–75.
  13. Manzoor, Numan; Munir, Hussan; Moayyed, Misagh (27–30 November 2012). Comparison of Static Analysis Tools for Finding Concurrency Bugs. 2012 IEEE 23rd International Symposium on Software Reliability Engineering Workshops. Dallas, TX, USA. pp. 129–133.
