EDUCATION :: SOFTWARE


Software systems—trust but verify


By Anne L. Tate, MT(ASCP), SC, MBA


The laboratory is exploding with multiple software solutions that connect disparate systems across the enterprise. These software solutions create and transfer clinical information to different end points and support pathways for clinical decision logic that can be harnessed into key analytical data. The data generated from these lab software systems is fueling the growing need for analytics and actionable informatics to support insight into the healthcare continuum. We know that lab software systems contain valuable raw descriptive data that can be transformed to support both diagnostic and predictive analytics as the data moves up and down the information chain.


That means now, more than ever, we need to know that laboratory results and the digital logic used to support result delivery are accurate and reliable. With more and more laboratory results being managed by digital systems, how do we know that the logic behind those systems is correct? What is the lab relying on to ensure that the software that calculates, manipulates, and auto-verifies laboratory diagnostic information is, in fact, accurate? How can we be assured that the clinical data generated by these software systems does not cause patient harm?


The answer to these questions is that we must trust but verify these software solutions when they are implemented, when there are updates, and when any changes occur. Epner, Gans and Garber, in their article "When Diagnostic Testing Leads to Harm: A New Outcomes Approach for Laboratory Medicine,"1 indicate that one of the five causes of patient harm related to analytical testing is result inaccuracy. We know that result accuracy can be impacted by mechanical issues like instrument calibration, but perhaps less conspicuous and not as easily detected are the errors that come from improperly configured data inputs, software rules syntax issues, and missing reference ranges on an EMR patient report. These omissions or errors lurk within laboratory software systems, causing insidious issues. They can manifest themselves as a missing diagnostic comment, a critical value that never triggers a pop-up notification, or a test code that is omitted from an interface. These issues can directly impact patient care by what was not reported or not analyzed, just as much as a mechanical instrument or device error that impacts a patient result.

Lab IT projects are stacking up. Continuous verification testing is now the norm rather than the exception. IT and LIS teams are tasked with perpetual software system verification projects to keep software current from the vendor and clinically relevant based on laboratory best practices. All of this takes time and resources to ensure that these various lab software systems are fully verified before they are placed into production.


Best practices


The following are general best practices for software verification testing for laboratory software. Refer to CLSI Auto-08 (Managing and Validating Laboratory Information Systems)2 and Auto-10 (Autoverification of Clinical Laboratory Test Results)3 for validation and verification documentation templates.

• Maintain a test system. A separate test system or area for verifying new software or proposed changes will enable you to maintain your production integrity and minimize any downtime for all software verification activities. The test system should be synced with the configuration of the production system to ensure that what is being verified in the test system is the same as in the production system.

• Separate your testing into dry and wet testing. Determine your software system requirements for testing to verify your data input configurations. Can these configuration parameters be tested within the test system, or do the configuration variables require clinical samples to imitate the behavior of the production system? Evaluation of your workflows and information transitions via a two-way diagram will provide you the information to determine what can be tested in a closed environment or should be tested across systems using simulated or clinical specimens.

• Determine if cross testing is warranted. If data needs to cross systems, then the validation plan should include the point of origin of the data and how it flows to the receiving system. The system inputs should be identified up front so that the output can be determined according to your configuration. For example, if a new software system is being implemented, all test codes and profiles should be exercised across the software systems regarding order and result retrieval. A full spectrum of results should be generated that represent the clinical values expected

26 MAY 2019 MLO-ONLINE.COM
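The cross-testing step above can be sketched as a simple reconciliation: given the test codes ordered into the upstream system and the results retrieved from the receiving system, flag any code whose result never arrived or falls outside the clinically expected span. All test codes, limits, and values below are hypothetical illustrations, not part of any vendor's system.

```python
# Hypothetical cross-system reconciliation sketch: compare what was
# ordered against what the receiving system reported back.

# Expected clinical value span for each ordered test code (illustrative).
ordered = {
    "K":   (3.5, 5.1),      # potassium, mmol/L
    "NA":  (136.0, 145.0),  # sodium, mmol/L
    "GLU": (70.0, 99.0),    # fasting glucose, mg/dL
}

# Results as retrieved from the downstream (receiving) system.
retrieved = {
    "K":  4.2,
    "NA": 151.0,
    # "GLU" never crossed the interface -- the kind of omission
    # cross testing is meant to catch.
}

def reconcile(ordered, retrieved):
    """Return (missing codes, out-of-span codes) for one round trip."""
    missing = sorted(code for code in ordered if code not in retrieved)
    out_of_span = sorted(
        code
        for code, value in retrieved.items()
        if code in ordered
        and not (ordered[code][0] <= value <= ordered[code][1])
    )
    return missing, out_of_span

missing, out_of_span = reconcile(ordered, retrieved)
print("missing:", missing)          # ordered but never resulted
print("out of span:", out_of_span)  # resulted outside the expected span
```

A real verification plan would drive this comparison from the interface logs of both systems rather than in-memory dictionaries, but the pass/fail logic is the same.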


Page 1  |  Page 2  |  Page 3  |  Page 4  |  Page 5  |  Page 6  |  Page 7  |  Page 8  |  Page 9  |  Page 10  |  Page 11  |  Page 12  |  Page 13  |  Page 14  |  Page 15  |  Page 16  |  Page 17  |  Page 18  |  Page 19  |  Page 20  |  Page 21  |  Page 22  |  Page 23  |  Page 24  |  Page 25  |  Page 26  |  Page 27  |  Page 28  |  Page 29  |  Page 30  |  Page 31  |  Page 32  |  Page 33  |  Page 34  |  Page 35  |  Page 36  |  Page 37  |  Page 38  |  Page 39  |  Page 40  |  Page 41  |  Page 42  |  Page 43  |  Page 44  |  Page 45  |  Page 46  |  Page 47  |  Page 48  |  Page 49  |  Page 50  |  Page 51  |  Page 52
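One of the insidious errors described earlier is a critical value that never triggers a notification. A dry-testing script can exercise such a rule at, below, and above each boundary before it reaches production; the thresholds and test codes here are hypothetical, chosen only to illustrate the technique.

```python
# Hypothetical verification of a critical-value rule: the kind of dry
# test that would catch an alert that never fires.

CRITICAL_LIMITS = {
    # test code: (critical low, critical high) -- illustrative values only
    "K":   (2.8, 6.2),     # potassium, mmol/L
    "GLU": (40.0, 450.0),  # glucose, mg/dL
}

def is_critical(code, value):
    """Return True when the result should trigger a critical-value alert."""
    low, high = CRITICAL_LIMITS[code]
    return value <= low or value >= high

# Exercise the rule around every boundary, the way a verification
# script would after each vendor update or rule change.
assert is_critical("K", 6.2)         # at the high limit: must alert
assert is_critical("K", 2.7)         # below the low limit: must alert
assert not is_critical("K", 4.0)     # normal result: must stay silent
assert is_critical("GLU", 39.9)      # critically low glucose: must alert
assert not is_critical("GLU", 100.0) # normal glucose: must stay silent
print("critical-value rule verified")
```

Boundary cases deserve particular attention: an off-by-one comparison (`<` versus `<=`) silently drops exactly the results that sit on a critical limit.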