Who Tests the Testers?

If test labs are the gateway to gaming for equipment manufacturers, then the quality of service and the measurable results that those labs provide are critical to both the integrity and the performance of the industry. There is no doubt that the regulatory authorities that oversee gaming, and the patrons who play at the casino, depend on the accuracy, quality and integrity of those labs. But how are “accuracy,” “quality” and “integrity” defined? How are they measured? More importantly, how are test labs held accountable to those standards? And who decides on the standards in the first place? As more and more cash-strapped jurisdictions rush to give gaming the go-ahead, and as advancements in technology continue to push casino floors around the world toward networked gaming, these questions are only getting more complex. The real question now may be, are answers even possible?

When Casino Enterprise Management decided to start investigating test lab metrics, we knew we did not want to offer definitive answers to any of the questions we posed, if definitive answers are even possible. Rather, we wanted to start a discussion of the issues—an open forum where regulators, manufacturers and labs both public and private could share their perspectives, offer insight on their jurisdictions, and maybe even learn from each other’s successes and mistakes.

Starting the discussion in the following pages are former gaming control board members and regulators, state-run labs from major commercial gaming jurisdictions in the United States, and major private labs operating worldwide. Due to the space constraints of this magazine, it is by no means a comprehensive list, though we did our best to include as many voices as possible. Nonetheless, representatives from the Native American gaming industry, international regulators and current policymakers are conspicuously absent. As such, we welcome and encourage you to join the discussion in the Gaming Regulators forum online at www.ACEMEnetwork.org. Our opinions may vary and we won’t always agree, but open communication—especially on contentious topics such as this—is the key to the continued success of the gaming industry.

BMM Compliance
Richard Williamson
Senior Vice President Regulatory Services

In your opinion, what constitutes success for a testing lab?
Success for an Independent Testing Laboratory (ITL) is measured a few different ways. The most obvious measure of success is testing accuracy, since that is an ITL’s primary function and the foundation of confidence in the tested product. The certification reports must clearly state the test results and any conditions deemed necessary.

An ITL must meet its customer’s needs, expectations and deadlines, be it regulator, operator or manufacturer. All three customers may have their own unique needs and expectations, but each is looking for consistency and reliability in their commercial engagement with the ITL. Due to the unique nature of each customer’s needs, every engagement is tailored to ensure each party receives the exact service required. The goal is to not shoehorn a customer into a service that does not meet the customer’s needs. This approach is the experience that BMM Compliance customers enjoy. Each customer is seeking and has a right to expect value in the contracted work. When customers have a positive business experience, it too is a measure of success.

Customers seek more from an ITL than just testing. An ITL needs to survey the client landscape and recognize its customers’ needs. BMM is a value-added vendor; we listen to our customers and develop services like our Tribal Training curriculum that helps regulators and operators understand the philosophy behind regulations, internal controls and operating procedures in order to develop a more efficient work environment. Repeated requests for these services are a true measure of success.

What processes and procedures does your lab use to measure and ensure its quality?
Quality for a laboratory starts with having an effective quality system. BMM is accredited to both the ISO 17025 and 17020 standards, which ensure that our laboratories meet the strictest requirements for incorporating effective processes and procedures for areas such as calibration of test equipment, separation of duties, confidentiality for our customers, ethical business practices, accurate and thorough testing methods for all areas of testing, and much more. This accreditation should be what regulators require as part of their suitability requirements for laboratory recognition. Regulators can rely on the expertise of the ISO accreditation system to ensure that the ITLs they use adhere to the standards by which they earn the certification. BMM was the first independent test laboratory of any kind to receive the ISO 17020 certification.

Do regulators hold your lab accountable for the accuracy of its results?
In North America, there is no formal process for keeping score on products tested by laboratories. The mechanism to enable such a process would require funding for this new element of regulatory oversight. Regulators have the option to investigate and, if necessary, to conduct a hearing to determine why a product is subsequently discovered to be out of scope. The process would be technical and would involve all stakeholders, i.e., the ITL, the manufacturer and the operator. Some regulators do investigate individual cases where a product failure is significant. In these cases, the ITL has to provide documentation that its test scripts were designed to detect the specific failure and, if not, make that correction. The operator has to demonstrate that it installed the product correctly and the manufacturer has to demonstrate that the product operated as designed. Anomalies can also surface when regulations do not require certain tests to be performed, for instance, interoperability testing. The answer is not always clear, which makes these investigations difficult.

Regulatory authorities that adopt technical standards determine policy. But who should decide the effective measurement of quality or fault of a test lab—the lab or the regulator?
The entire legalized gambling mechanism is designed to be regulated by government officials who are charged to administer unbiased oversight of the industry. The ITL, while independent, works by permission of the regulator. The final measure of quality belongs with the regulator. While the ITL provides expert advice to the governing bodies, the regulators would be remiss if they did not also seek advice from other experts in the gaming field, e.g., manufacturers and operators. The regulator has to weigh all the evidence to reach an unbiased conclusion.

What are/should be the comparative metrics used to measure test labs? What are the right metrics?
One metric, strictly from a testing perspective, is the percentage of revocations versus approvals. A quality baseline is difficult to establish without current numbers from all laboratories. The other factor is determining the nature of the revocations that have occurred at each ITL: How many revocations were testing issues that should have been discovered before the product was released, and how many were beyond the scope of laboratory testing? Again, mandated testing regulations, or the absence of such guidance from the regulator, has an impact on detecting many flaws.
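
To make the arithmetic behind this metric concrete, here is a minimal sketch (in Python, with invented lab names and figures) of how a regulator might tabulate revocation rates and separate in-scope from out-of-scope revocations. It illustrates the calculation only; it is not an established reporting format.

```python
from dataclasses import dataclass

@dataclass
class LabRecord:
    approvals: int             # products certified by the lab
    revocations: int           # certifications later revoked
    in_scope_revocations: int  # revocations a lab test arguably should have caught

def revocation_rate(rec: LabRecord) -> float:
    """Revocations as a percentage of all certifications issued."""
    total = rec.approvals + rec.revocations
    return 100.0 * rec.revocations / total if total else 0.0

def in_scope_share(rec: LabRecord) -> float:
    """Share of revocations that fell within the lab's testing scope."""
    return 100.0 * rec.in_scope_revocations / rec.revocations if rec.revocations else 0.0

labs = {"Lab A": LabRecord(950, 12, 4), "Lab B": LabRecord(400, 9, 7)}
for name, rec in labs.items():
    print(f"{name}: {revocation_rate(rec):.1f}% of certifications revoked, "
          f"{in_scope_share(rec):.0f}% of those within testing scope")
```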

Only one major regulated market (Australia) tracks and measures the work and quality of test labs at a national level. Should other markets consider adopting this model? Would it be feasible?
In Australia, each ITL is held accountable for its performance by the use of a Key Performance Indicator (KPI) system. There are basically four KPIs that participating jurisdictions collect and record:

1. A KPI 1 is a minor problem—it could be a minor error in the machine’s audit screen or on the game’s artwork. No action is required other than correcting the issue in the next submission.
2. A KPI 2 is a more serious problem—where, for example, the connection to a central monitoring system may not be operating properly and a retrofit is required. The machine, however, does not need to be immediately shut down.
3. A KPI 3 is a critical problem—where, for example, players are being disadvantaged and there is a requirement for an immediate shutdown of the gaming equipment.
4. Integrity KPI: A jurisdiction may assign an Integrity KPI if it feels the conduct of an evaluation was not carried out in a satisfactory manner. Integrity KPIs are not aggregated into the KPI 1–3 comparison tables; they are meant to be discussed with testers during the ITL’s face-to-face meeting with the government Assessment Panel.

BMM Compliance’s KPI counts were well below the average for all of the competing laboratories, and BMM actually recorded zero KPI 3s. A KPI 3 is equivalent to a revocation here in the U.S. It is through 27 years of experience that BMM strives for excellence.
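
As an illustration of how such a scheme might be recorded, the following sketch (invented data, hypothetical structure) tallies KPI 1–3 incidents per lab into a comparison table while keeping Integrity KPIs out of the aggregation, as described above.

```python
from collections import Counter
from enum import Enum

class KPI(Enum):
    KPI_1 = 1      # minor issue; correct in the next submission
    KPI_2 = 2      # serious issue; retrofit required, no shutdown
    KPI_3 = 3      # critical issue; immediate shutdown of the equipment
    INTEGRITY = 4  # unsatisfactory conduct of an evaluation

incidents = [      # (lab, KPI assigned) -- invented examples
    ("Lab A", KPI.KPI_1),
    ("Lab A", KPI.KPI_2),
    ("Lab B", KPI.KPI_1),
    ("Lab B", KPI.INTEGRITY),  # discussed at the Assessment Panel, not tabulated
]

# Build the KPI 1-3 comparison table; Integrity KPIs are excluded from aggregation.
comparison = Counter(
    (lab, kpi) for lab, kpi in incidents if kpi is not KPI.INTEGRITY
)
for (lab, kpi), count in sorted(comparison.items(),
                                key=lambda item: (item[0][0], item[0][1].value)):
    print(f"{lab}: {kpi.name} x {count}")
```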

In the United States, there are more than 300 gaming authorities and a great diversity in the type of gaming permitted in these jurisdictions—Class II, Class III, lottery, charitable, amusement and pari-mutuel. Seeking national oversight is impractical. The most feasible solution would be either a consortium of regulators that permit like products or an independent and unbiased gaming organization. There are tools to effectuate such oversight, such as Gaming Informatics’ Iris System, which is already used by some jurisdictions to track the status of game software (approved, obsolete or revoked) from each authorized ITL. There is, of course, nothing prohibiting any one jurisdiction from initiating such a monitoring program.

Which system of testing do you think is best for the industry—state-owned/operated labs or independent labs?
Government agencies certainly would be best served by having their own experts always available. Commissions or TGRAs could use that expertise to provide answers whenever needed. The reality is, however, that government systems are not conducive to meeting rapid changes, typically have budget constraints, and have difficulty obtaining specific expertise on a temporary or as-needed basis.

Do you think it is appropriate for labs to be involved with authoring standards?
It is appropriate, and there are many jurisdictions around the world that agree. BMM Compliance co-authored the Victorian Standard in the early ’80s, which later became the foundation of the Australian National standard. When only Nevada and New Jersey operated legal gaming in the United States, the Australian technical standards were often studied by New Jersey regulators to establish consistent rules for electronic games of chance. Rulemaking is a complicated process and, quite frankly, it is an art form. Many regulators have not had the opportunity to develop this expertise, and the topic of electronic gaming requires subject matter experts. In jurisdictions that author their own rules, the process for rule adoption involves a public comment process that ensures all parties have a say in the rules being adopted. Any ITL that takes on the process of creating rules for a jurisdiction should work with each regulatory authority to ensure that the specific individuals responsible for enforcement of the rules are conversant in them and should encourage a transparent adoption process that permits the industry to comment on any changes to the rules before adoption.

It is important to note that once a jurisdiction adopts rules, regardless of who wrote them, those rules become the property of the jurisdiction. The regulatory body that adopts the rules is solely responsible for any rule changes that may be suggested thereafter. An ITL may make suggestions for changes to rules, but it cannot legally change rules already adopted. The rule adoption process is a legal mechanism that belongs to the regulators.

Within any regulatory standard, product compliance is often subject to interpretation (e.g., explicit vs. implicit requirements) by the lab, which ultimately leads to variation. In these cases, whose interpretation is “right”?
The only interpretation that any laboratory is bound by is the interpretation of the jurisdiction for which the testing is being performed. The laboratory is not in a position to make its own interpretations as to how a regulation is enforced. Whenever BMM encounters a situation where a standard is ambiguous and a manufacturer is interpreting it one way and we are interpreting it another, the first call is always to the regulator of the jurisdiction to get the only interpretation that matters. How the regulator wants the rule enforced is documented and adhered to in any future testing.

Who is/should be ultimately responsible for the quality of the work that labs put out?
At BMM, we take responsibility for our work to the degree that it is warranted by the certification report we issue. We back that statement up by performing all re-testing free of charge for any product that was found to have an issue that was a direct result of BMM’s testing. The laboratory should take responsibility for any product it tests.

Having said that, laboratories are still bound to testing only what is required by the adopted technical standard, and many adopted rules have limitations. If, for instance, the technical standard does not require interoperability testing with Online Accounting Systems, the laboratory has no ability to require that a manufacturer’s games be tested for interoperability, which opens the laboratory up to potential scrutiny if an issue occurs in a casino where there are communication problems between a game and a system.

Eclipse Compliance Testing
Janice Farley
Vice President

In your opinion, what constitutes success for a testing lab?
Regulatory compliance testing laboratories like Eclipse Compliance Testing are tasked with ensuring the integrity of gaming devices and systems used in gaming establishments. This is a task not to be taken lightly. A successful testing lab provides the industry with reliable testing results—testing results that are accurate because the compliance testing behind them is thorough and efficient. A measure of success for a compliance testing lab is receiving acknowledgement from manufacturers and regulators alike for helping to assure the compliance of their gaming devices, essentially facilitating an efficient and cost-effective testing process on behalf of the gaming device manufacturer for the benefit of the regulator.

Another measure of success for a testing lab is customer loyalty and referral business. Eclipse Compliance Testing has been fortunate to have many loyal customers that repeatedly refer new business to our growing test laboratory. Through customer loyalty, business referrals and reliable integrity, a compliance testing laboratory will realize success in the growth of its business.

What processes and procedures does your lab use to measure and ensure its quality?
Eclipse Compliance Testing has recently attained ISO 17025 accreditation from the American Association for Laboratory Accreditation (A2LA). We are pleased to announce that we achieved this monumental accomplishment on our first try. ISO accreditation through A2LA entails an independent audit of our documented testing processes and procedures. This audit ensures that our processes and procedures are in accordance with established industry standards and are executed consistently. An ISO 17025 audit is an assessment of our quality system.

Further to our ISO accreditation audit, we employ staff members who are responsible for ensuring that our test scripts are accurate and relevant to the specific product(s) being tested. Additionally, all staff members are subject to background investigations to ensure that our team is above reproach. Finally, our laboratory is licensed in many gaming jurisdictions. Maintaining these licenses is critical to our company’s future success. Throughout the licensing process, many regulatory agencies visit our facilities and interview our staff to verify our competency.

Essentially, Eclipse Compliance Testing ensures its quality by documenting its internal processes and procedures to guarantee consistency in testing as well as repeatable and reproducible results. Our quality system assures that our processes and procedures are followed and not just “signed off.” Additionally, our perspective on quality is that we build quality into our processes and procedures; we don’t just inspect for it.

Do regulators hold your lab accountable for the accuracy of its results?
Eclipse Compliance Testing is authorized as an Independent Testing Laboratory in hundreds of regulated gaming jurisdictions. Each of the regulatory agencies that we serve relies upon our testing results to ensure compliance of gaming equipment to their jurisdictional requirements. We often speak with regulators to discuss new products and technologies. These regulatory agencies hold us accountable for the quality of our work. Should a regulator ever find an error or omission in our report, we would amend our report to address the error or omission.

Regulatory authorities that adopt technical standards determine policy. But who should decide the effective measurement of quality or fault of a test lab—the lab or the regulator?
Ultimately, it is the responsibility of regulatory authorities to ensure the fairness and integrity of games offered to the general public. Therefore, first and foremost, regulatory authorities must be confident that the technical standards they adopt provide for an effective means of ensuring game fairness and integrity. It is Eclipse Compliance Testing’s view that the process for technical standards adoption should include a dialog between the regulatory authority and at least two independent testing labs, so that multiple viewpoints and varied experiences may be fully explored.

The test laboratory is responsible for the quality of its work product (a compliance report) and for the information contained within that work product. Because a lab cannot fully replicate all the variables of field-level environments, testing in a laboratory environment has certain limitations that preclude its ability to verify all possible scenarios. As such, the ultimate responsibility for performance must lie with the gaming device manufacturer. The test lab should maintain internal metrics and documentation to assure quality and consistency in its testing processes and procedures. Regulators should maintain/develop metrics to assure that the test laboratory is providing them with sufficient information to ensure that products are being tested thoroughly and in accordance with their unique technical standards.

The effective measurement of quality is probably the most difficult aspect for a regulator to assess, as the “work product” is the lab’s compliance report on its findings. It is not like a manufactured product that can be subjected to various forms of testing to determine performance and life expectancy.

As we alluded to, the ultimate compliance of a gaming device rests with the manufacturer that designs, develops and sells the product to the gaming establishment operator. The role and purpose of the testing laboratory is to ensure that gaming devices comply with the established technical standards. If a gaming device is found not to comply with these standards, then the regulatory agency should decide the effective measure of quality or fault by determining whether the compliance matter should have been reviewed as a normal procedure in the laboratory. Many compliance-related issues arise when gaming devices are subjected to unusual circumstances that cannot be, or are not typically, replicated in the laboratory.

So, if testing performed by the laboratory was sufficient and the matter was outside the scope of lab testing, then the manufacturer should be held accountable for the matter. If the test laboratory omitted its documented testing process or procedure and the matter was overlooked as a result of negligence, then the lab should share accountability with the manufacturer for the matter.

What are/should be the comparative metrics used to measure test labs? What are the right metrics?
There has been a recent movement among regulatory authorities toward requiring ISO 17025 accreditation when authorizing labs. Although this is a relatively straightforward way to establish a base metric, it does not necessarily mean that a lab puts out a quality product; rather, it only ensures that there are documented processes and procedures in place that reflect the way in which a lab conducts its testing. It also provides a means to improve a laboratory’s processes. However, it does provide a level of confidence to the regulatory authorities that the lab has an organized quality management system in place. Therefore, ISO 17025 is a reasonably good metric.

Demonstrating a laboratory’s quality is more difficult, as it is ingrained in the operation of the lab itself: in the lab’s dedication to producing the highest quality testing results and its understanding of the importance of compliance. Additional considerations include the technical aptitude of the lab’s staff, as well as processes and procedures that provide consistency and a means for their own improvement. Other metrics to measure test labs could include the following:

• Frequency or quantity of errors or omissions in compliance reports;
• Industry references from regulatory agencies and gaming device manufacturers;
• Reputation amongst regulatory agencies; and
• The lab’s responsiveness to any issues or matters that may arise.

Do you think it is appropriate for labs to be involved with authoring standards?
Labs can be a valuable resource to regulatory agencies in the development of standards and should be consulted when regulatory authorities prepare standards, but they should not be relied upon to author the standards. Further, when regulatory authorities prepare standards, multiple compliance testing laboratories should be consulted to ensure that the regulatory agency obtains the broadest base of information from the widest range of resources.

Having a compliance testing laboratory author standards is akin to a mouse making cheese. While the compliance testing laboratory will have vast experience with technical standards, as these standards are relied upon for compliance testing, labs should maintain an arm’s length from authoring standards. Maintaining this arm’s length relationship will ensure that the test labs are engaging in the evaluation of gaming devices in a manner that addresses each regulatory agency’s needs.

Within any regulatory standard, product compliance is often subject to interpretation (e.g., explicit vs. implicit requirements) by the lab, which ultimately leads to variation. In these cases, whose interpretation is “right”?
Ultimately, the “right” interpretation is that of the regulator. A lab that renders an interpretation of a standard without first consulting with a regulator may be doing a disservice to its customer (the manufacturer) and the regulatory agency. Regulators should be consulted and informed of vagaries in their standards, so that open discussion can lead to clarification. This may be a time-consuming process, but we have found most regulators open to discussion and quick to provide clarification.

Who is/should be ultimately responsible for the quality of the work that labs put out?
Having a vested interest in a test laboratory, I personally feel responsible for the quality of work that our lab renders. We have empowered our staff to “own” their testing responsibilities and projects. While our staff is empowered to own the quality of their work, I feel ultimately responsible to assure that only the highest quality work product is released to the industry.

This does not mean that a lab should assume any liability associated with a fault in their customer’s product. It is in the interest of the lab to provide confidence to the regulatory agency that the work performed is of the highest quality and that the regulatory agency can rely upon the lab’s findings. Additionally, it is the responsibility of the gaming equipment manufacturer to provide input and assistance to the lab during its product compliance testing to clarify any issues that are uncovered. The cooperative relationship between the lab, manufacturer and regulator will assure confidence that products will perform in accordance with the established standards and guidelines.

Gaming Laboratories International (GLI)
James R. Maida
President

In your opinion, what constitutes success for a testing lab?
A lab is successful when it is respected by all parties it deals with—regulators, suppliers and operators. This respect is gained by never sacrificing independence and by maintaining the highest standards of ethics in business practices.

By always remaining independent from suppliers and other interests, the lab can serve the regulator properly, and the regulator can rest assured that no conflict of interest is influencing test results or opinions. This independence also helps to maintain the strict level of confidentiality that is necessary in the gaming environment. Additionally, to achieve success on both sides (in the lab and in the jurisdiction), regulators must demand the highest quality from the laboratory process. In the past, we have seen some jurisdictions devalue the quality of the testing process, which has led to situations where suppliers “shop” their products to labs that may skip tests or perform non-standard tests. This leads to a breakdown in the regulatory environment, which isn’t good for anyone.

Last, a lab is successful in the same way that other businesses are successful—when work is done quickly, efficiently and cost-effectively and when clients and customers are happy. We’re successful when we do testing work that is right the first time, because that helps suppliers get products to market faster and helps regulators continuously gain higher levels of trust and confidence in the work that we do.

What processes and procedures does your lab use to measure and ensure its quality?
Testing labs should not rely on any other previous tests or documentation, reports from other test labs, or reports from the suppliers themselves as evidence of compliance for current testing. These certifications may not be genuine, could be outdated or superseded, or may be based on policy considerations from an unrelated jurisdiction.

Ensuring quality is our top priority, and to accomplish that goal, we have a quality assurance team that operates completely independently from our testing teams. At GLI, we are proud to be the only lab of our kind in the world with a completely independent quality assurance team that reviews all testing work. This means that all completed testing work is verified and cross-checked by an independent group to ensure that we hold ourselves to the highest level of accuracy and accountability possible.

Further, we are the only test lab of our kind with a quality manager that interfaces directly with all of our worldwide labs.

In addition, we measure and monitor process cycle time and internal and external defects between offices and within departments. Any variation in this information is then examined to determine the overall level of performance in relation to the expectations of regulators and suppliers and to gauge how well a process is performing.
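
A rough sketch of that kind of monitoring, with invented submissions and a hypothetical drift threshold, might look like this:

```python
from datetime import date
from statistics import mean

# (office, date received, date certified, defects reported after release) -- invented
submissions = [
    ("Office A", date(2009, 3, 1), date(2009, 4, 2), 0),
    ("Office A", date(2009, 3, 5), date(2009, 4, 20), 1),
    ("Office B", date(2009, 3, 2), date(2009, 3, 30), 0),
]

# Collect cycle time (days from receipt to certification) per office.
cycles_by_office = {}
for office, received, certified, _defects in submissions:
    cycles_by_office.setdefault(office, []).append((certified - received).days)

# Flag offices whose average cycle time drifts noticeably from the overall mean.
overall = mean(days for cycle_list in cycles_by_office.values() for days in cycle_list)
for office, cycles in cycles_by_office.items():
    drift = mean(cycles) - overall
    flag = "  <-- examine" if abs(drift) > 7 else ""
    print(f"{office}: average {mean(cycles):.1f} days (drift {drift:+.1f} vs. overall){flag}")
```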

We also work hard to maintain our ISO accreditations, meaning GLI participates in voluntary accreditation, which involves cooperating with an independent external accreditation body to assess whether we have a compliant quality system in place and technically competent personnel. This adds an additional layer of assurance and accountability.

Do regulators hold your lab accountable for the accuracy of its results?
At GLI, we test for more than 450 jurisdictions and government bodies around the world, and the situation is the same across the globe—regulators do hold the lab accountable and will contact us for clarification should they have concerns or notice a possible discrepancy. They may ask to see evidence or additional records to support the test results and may request an updated report to include additional information, clarifications or revised information. In addition, for advanced technologies, GLI has frequent communications with our clients, and it’s very important to us that they understand our certification reports.

This accountability begins even before regulators contract with us; it begins with accreditation. There is a big difference between a lab that is “accredited” by an independent investigative body and a lab that is “licensed” by filling out paperwork and submitting a fee.

For example, in Australia, Europe and South Africa, an Accredited Test Facility (ATF) is named based on a regulator’s review of an applicant’s technical ability, probity and background investigation and an assessment as to whether that lab can achieve highly accurate test results. Each year, the ATF is evaluated for its accuracy and capabilities, and GLI has achieved accreditation in each of those regions. Further, Europe requires test labs to achieve an ISO accreditation, which GLI has also accomplished.

In North America, it is common for regulators to mandate a laboratory capability review before test results are obtained from the labs. Colorado went even further in 2003, when it required an initial technical review of labs to determine whether they could test equipment to a high technical standard. Then they required the labs that passed that test to undergo a probity check to become a “certified lab.”

Regulatory authorities that adopt technical standards determine policy. But who should decide the effective measurement of quality or fault of a test lab—the lab or the regulator?
Regulators and accrediting bodies should determine the quality and measure the effectiveness of the lab. However, the lab should assist this process by having a strict quality assurance program.

The lab should establish an internal process of effectively measuring the quality of its work. This is essential for any type of testing lab, not just those in gaming. The accreditation process ensures that a system is in place to ensure this happens.

Part of being a technically competent organization involves understanding enough about the standards of your customer to measure the quality of your performance. Part of the process is to communicate with the regulators and submitters to assess their needs and make sure we can accommodate them with impartiality and independence. At a high level, the business goal of any testing facility is quite simple: first, to provide a timely, thorough and accurate test result or evaluation of the test subject’s compliance with regulatory requirements; second, to maintain adequate records to support the results or determination associated with the evaluation. In our industry, neither could truly be accomplished without an effective internal measurement of quality; otherwise, the shortfall would quickly become evident.

Testing facilities in our industry are service providers. In a free-market environment, quality associated with service is ultimately decided by the customer. In our business, this is based on consistency in the results delivered.

What are/should be the comparative metrics used to measure test labs? What are the right metrics?
In their RFP processes, regulators tell labs very specifically what they are looking for, and the most common things they look for are:

• Fiscal viability and financial independence
• Ability to meet probity and background investigation requirements
• The ability to meet the expectations of the regulator
• 24-hour support for regulators
• Training and support for on-site inspections
• Specialized departments
• Use of independent testing methods and independent verification
• Adequate office space for the required equipment
• Physical building security, such as alarms, surveillance, etc.
• Secure storage areas
• A documentation management system
• Sufficient testing tools and the ability to independently develop tools if needed
• Effective communication of test results
• Communication protocol testing
• Adequate in-house specialists: mathematicians, mechanical, electrical and software engineering staff; compliance engineering staff; quality assurance staff
• Only performing regulatory testing to remain independent
• Maintaining a statement of independence and establishing a means to remain independent
• Laboratory liability

Only one major regulated market (Australia) tracks and measures the work and quality of test labs at a national level. Should other markets consider adopting this model? Would it be feasible?
Gaming regulation in North America is conducted state-by-state, tribe-by-tribe or province-by-province, and most jurisdictions in North America have a single test laboratory to avoid lab shopping and discrepancies that come from duplication of efforts.

Such systems generally track only issues that can be observed by a player and reported. The quality of a test laboratory’s work must be measured at various stages in the testing process. These measurements are not observed by players, who see only external issues—issues that should be rare.

At GLI we track each of our quality issues, both internal and external (those that appear in the field), and we provide a root cause analysis. Each regulator may discuss our findings, which are reviewed by ISO personnel each year. In addition, it should be noted that in Australia there are fewer than seven jurisdictions, which all meet each year in a single room, whereas in North America there are more than 350 regulators—that would be a big meeting!

Do you think it is appropriate for labs to be involved with authoring standards?
Absolutely. In fact, this is done every day by test labs outside of the gaming industry. For example, UL, CSA and other government, quasi-government and private testing companies write standards every day and get them approved by a variety of domestic and international governmental agencies. In addition, certain elite private testing agencies that are recognized worldwide also maintain their own standards for governments to adopt, either partially or in full. It is important to know that there is a huge difference between writing standards and adopting standards. Regulators direct the writing and adopt the standards that meet their policy and laws. All standards and regulations are subject to public comment, vetting and ultimate approval by a government.

Think of a referee on the football field. He makes an independent call based on the rules of the game. It’s the sports analysts and commentators who interpret the quality of the call.

Testing agencies almost always have some inherent involvement with authoring standards. This helps to ensure the existence of testable requirements. As an expert in the industry, GLI is asked by regulators worldwide to be involved in the process of authoring or developing standards, even if only as a consultant. Testers see issues that could go unnoticed by others. It is important to note that regulators seek a laboratory to assist them in writing standards with a global view so that the consultant can tell them what has worked and not worked around the globe. This is what recently occurred in South America with our involvement in Chile, Peru, Argentina, Colombia and Panama. We also have assisted throughout Europe and Asia. However, we agree it is important that the regulation-writing process is structured in a manner that prevents conflicts of interest and provides stakeholders, the public, legislators and regulators with an adequate review and comment period.

Within any regulatory standard, product compliance is often subject to interpretation (e.g., explicit vs. implicit requirements) by the lab, which ultimately leads to variation. In these cases, whose interpretation is “right”?
The regulator’s interpretation is right, and the laboratory testing staff may NEVER interpret anything. It is the standard holder or the writer of the standard that makes all interpretations. Testers test to a standard using pre-approved test scripts. Every answer, every measurement is factual and produces a result. That result is compared against what the regulator expects in a pass/fail manner. That’s why the most sensible solution in this situation is to objectively present the facts to the regulatory authority that adopted or authored the requirement and ask for clarification. We find that suppliers and regulators often appreciate this as a means to improve the overall process of managing compliance.

Therefore, there should never be any variance in reported results between testers. The facts found should always be the same: The interpretation may lead to variations in policy by the regulator but not by the lab. In cases where a laboratory has created a standard (such as in the case of electrical safety testing), it is the laboratory that created the standard that is the authority on interpretations. Where the standard is adopted by a regulator, it is the regulator who has the authority over all interpretations. We see the former many times in our business, where a product is passed by others against the GLI Standard Series using interpretations that are not authorized.
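
As a simple illustration of the scripted, pass/fail approach described above, the sketch below compares factual measurements against pre-approved expectations. The requirement names and thresholds are invented for the example and do not represent any jurisdiction’s actual standard.

```python
# Pre-approved expectations drawn from the adopted standard (hypothetical).
TEST_SCRIPT = {
    "min_theoretical_rtp_percent": lambda measured: measured >= 75.0,
    "door_open_event_logged": lambda measured: measured is True,
    "credit_meter_rollover_handled": lambda measured: measured is True,
}

# Facts produced by the test bench (hypothetical measurements).
measurements = {
    "min_theoretical_rtp_percent": 87.4,
    "door_open_event_logged": True,
    "credit_meter_rollover_handled": False,
}

# Each factual result is compared against the expectation in a pass/fail manner.
for requirement, check in TEST_SCRIPT.items():
    result = "PASS" if check(measurements[requirement]) else "FAIL"
    print(f"{requirement}: {result}")
```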

Who is/should be ultimately responsible for the quality of the work that labs put out?
Unless other arrangements are specified, the lab issuing the report assumes responsibility for the quality of the testing work specified in its certification. This is standard in any accredited testing facility. This does not mean that the lab assumes responsibility for the operation of the product; it is accountable for its analytical work, results and determinations. In such cases, the lab is responsible for demonstrating that it performed the testing sufficiently and that it has adequate records to support the results.

A test report should only be considered a component of ensuring compliance, not the sole factor. Testing is usually coupled with a supporting inspection function or pre-use checks to account for variation between the test environment and the live environment.

Routine maintenance, inspections and audits are commonly put in place to ensure products remain compliant while they are in use. A laboratory is accountable for its results, but it is not expected to account for factors outside of the testing environment. Nor is it feasible to expect a laboratory to test every possible configuration of a product; rather, it makes reasonable determinations based on the most likely scenarios, product knowledge and regulatory requirements.

This is why additional measures, such as a software authentication signature and other product information, should be listed in a test report. The lab should also provide relevant information in the report to help the regulator understand any basic characteristics and features needed to configure and use the product in a compliant manner.
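
One common form such a signature can take is a cryptographic digest of the submitted software image, which the report lists so the installed software can later be verified against what was tested. The sketch below uses SHA-256 purely as an example algorithm and a hypothetical file name; it is not a statement of any lab’s actual method.

```python
import hashlib

def authentication_signature(path: str, chunk_size: int = 1 << 20) -> str:
    """Return a hex digest of the software image at `path`, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as image:
        for chunk in iter(lambda: image.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Example usage (hypothetical file name):
# print(authentication_signature("game_v1_2_3.bin"))
```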

Gaming Regulatory Consultants
Pat Leen and Tom Nelson
Principals

In your opinion, what constitutes success for a test lab?
There is no single metric that indicates a successful lab. Rejection rates, for example, may say more about the unrealistic testing protocols of the lab and/or the lack of manufacturer quality control than about lab proficiency. Similarly, a low average time from submission to approval may suggest lab efficiency but may also be the result of cursory testing. As the final safeguard of gaming device integrity, a lab needs to make all reasonable and prudent efforts to prevent defective games from reaching the market. The tests that a lab performs should be sufficiently rigorous that obvious critical flaws do not escape detection. Even the best lab cannot be expected to identify all possible game problems, but a quality lab should not experience more than a very small percentage of mandatory modifications to approved or certified games.

Does your jurisdiction hold test labs accountable for the accuracy of their results? If so, how? If not, why not?
Based on our experience in developing and operating the Michigan Gaming Control Board test lab, we firmly believe that all labs should be held accountable for the accuracy of their results. As noted, the standard is not perfection, but if a game anomaly or malfunction is discovered in the field, and that flaw should have been detected by a properly implemented test protocol, there should be some consequences. For public labs, this might involve discipline of the responsible personnel. For private labs, a monetary penalty under contractual terms, or a regulatory disciplinary action if the lab is licensed by the agency, would be appropriate.

Do you think it would benefit the gaming industry for test labs to be more transparent in their methodologies and internal workings?
Absolutely. Some lab tests are of the “black box” nature where expected outcomes are matched to actual results, but that doesn’t mean that the entire process should be invisible to the regulatory agency. There is no compromise of confidentiality for the lab to explain in layman’s terms exactly what tests are conducted and why, and what generic quality controls are in place to ensure accuracy of results and game integrity. Particularly in the case of private labs, if they can’t or won’t explain what they do, the question has to be asked—what are they hiding and why?

Regulatory authorities that adopt technical standards determine policy. But who should decide the effective measurement of quality or fault of a test lab—the lab or the regulator?
The proper oversight body must be the regulator; otherwise, where are the checks and balances? Unfortunately, this is closely associated with the above response. For regulators to properly address this oversight responsibility, they must understand what is being tested, how it’s being tested, why it’s being tested and what the significance is of any deviations between expected results and actual results. In other words, they need to understand what the lab is doing and why in order to make meaningful determinations and recommendations about the lab’s results.

What are/should be the comparative metrics used to measure test labs? What are the right metrics?
The quality of a gaming lab is measured by the quantity of submissions, the quality of the testing process, and the verifiable results that those tests produce. In other words, can the gaming lab process a high level of submissions, perform meaningful and quantifiable test protocols, and produce realistic results in a reasonable time? In addition, the regulator must look at how many mistakes the lab has made (i.e., how many flaws that should have been found were missed?).

Only one major regulated market (Australia) tracks and measures the work and quality of test labs at a national level. Should other markets consider adopting this model?
It is worth considering in order to have truly proper oversight of any gaming lab function, but there would need to be an independent, conflict-free body doing the assessments.

Do you think it is appropriate for labs to be involved with authoring standards?
We believe the role of the lab is to advise and consult, not to be the author, final determinant and the testing facility.

Within any regulatory standard, product compliance is often subject to interpretation (e.g., explicit vs. implicit requirements) by the lab, which ultimately leads to variation. In these cases, whose interpretation is “right”?
We don’t necessarily agree with the premise. Part of the problem lies with imprecision in the standards. In our September 2004 CEM article “Taking a Stand for Standards,” we cited examples of unnecessarily vague standards and noted that “the intent of a standard is definitely not a standard in and of itself. The intent must be further stated in a quantifiable, replicable standard. The criterion for a gaming standard, therefore, is simple. It must be clear, concise, definable and repeatable. In other words, to be a workable standard, it should be able to be duplicated in a laboratory setting using exact measurements with predictable, quantifiable results. Otherwise, the agency is effectively ceding its discretion to the testing lab to use the lab’s own best judgment as to testing protocols. The result may or may not accurately reflect the intent of the agency and could lead to inconsistent results.” But to answer the question, in case of a tie, the regulator wins.

Who is/should be ultimately responsible for the quality of the work that labs put out?
We have also written on this issue, but as a practical matter it is the regulator and not the independent lab that has this responsibility. The regulatory agency may be able to outsource the function, but it can never outsource the responsibility—it needs to understand this reality.

Mississippi Gaming Commission
Emil Lyon
Director – Gaming Laboratory

In your opinion, what constitutes success for a testing lab?
Success in manufacturing is generally considered to be zero defects. So, success in testing should be the same—zero defects.

Do regulators in your jurisdiction hold your lab accountable for the accuracy of its results?
In Mississippi we administer the approval of gaming devices that have been tested by licensed Independent Test Labs (ITLs). We maintain a complete database of approved themes and systems, and those that are no longer approved for our jurisdiction, and are able to react rapidly to issues in the field. However, we try to be proactive. We would certainly hold accountable a test laboratory whose test procedures allowed games with deficiencies to be put into play, but we would hold the manufacturer to a higher level of accountability. We require each manufacturer to submit a certification along with each gaming device or modification, stating that their machine(s) comply with our technical standards.

Regulatory authorities that adopt technical standards determine policy. But who should decide the effective measurement of quality or fault of a test lab—the lab or the regulator?
As regulators, we are the final authority and would make this decision.

What are/should be the comparative metrics used to measure test labs? What are the right metrics?
This is a question that we face every day and one that has no simple answer. Needless to say, a slot machine or other gaming device that fails to comply with our regulations with regard to fairness, security, reliability or auditability will not be permitted. A licensed test lab that recommended approval of a device that did not comply would fall under scrutiny along with the manufacturer of the device. Our goal is zero defects, or 100 percent compliance with our standards.

Which system of testing do you think is best for the industry, state-owned/operated labs or independent labs?
Here in Mississippi we have been using a “hybrid” system where licensed independent test labs are permitted to test, submit reports and make recommendations, but the state lab ultimately decides on the approval of the device. This system has been quite successful for us.

Do you think it is appropriate for labs to be involved with authoring standards?
In Mississippi we welcome input from all sources, including manufacturers, casino operators, labs from other jurisdictions and ITLs. However, we continue to act in the best interests of our state and our patrons and will continue to author our own regulations and technical standards.

Within any regulatory standard, product compliance is often subject to interpretation (e.g., explicit vs. implicit requirements) by the lab, which ultimately leads to variation. In these cases, whose interpretation is “right”?
The Mississippi Gaming Commission is the interpreter of regulatory compliance in our jurisdiction. We maintain continuity of compliance through our approval process.

Who is/should be ultimately responsible for the quality of the work that labs put out?
Responsibility for the quality of the work (reports and recommendations) produced by the labs is theirs alone. However, responsibility for the quality of the products still lies with the manufacturer.

Nevada State Gaming Control Board
Travis Foley
Technology Chief

In your opinion, what constitutes success for a testing lab?
In Nevada, success for the lab means providing effective investigations and timely approvals in conformance with applicable statutes and regulations. Success for the lab also means maintaining the highest level of integrity while promoting positive growth, competition and stability of the gaming industry in Nevada.

What processes and procedures does your lab use to measure and ensure its quality?
To provide effective investigations, the lab utilizes a multi-tiered process for evaluating test processes and test results. This includes peer reviews, security procedures, supervisory reviews and independent audits of documented internal controls. Test procedures constantly evolve in accordance with new technology.

To provide timely approvals, the lab performs targeted testing. The initial submission of a new gaming device platform goes through our most in-depth and robust set of tests. For subsequent testing we make intelligent risk-based decisions throughout the submission and testing process, which allows for timely approvals while maintaining the integrity of the gaming device.

Do regulators in your jurisdiction hold your lab accountable for the accuracy of its results?
In Nevada, the lab is part of the regulatory agency. This is due in part to the importance of the lab’s mission and the state’s ability to directly control and hold the lab accountable for the accuracy of its results.

Regulatory authorities that adopt technical standards determine policy. But who should decide the effective measurement of quality or fault of a test lab—the lab or the regulator?
The responsibility of determining the effectiveness of a test lab falls both on the lab and the regulator. For an independent or private lab, the success of the business is dependent on its ability to meet the quality expectations of the regulators it is working for. For regulators, it is vital that they have a technical understanding of the private lab processes and procedures and that they implement their own procedures to verify the quality and consistency of the test results or recommendations submitted.

Which system of testing do you think is best for the industry—state-owned/operated labs or independent labs?
This is a question that can only be answered for each jurisdiction. Each jurisdiction has its own requirements and needs, and in some cases these requirements and needs justify a state-run lab. In other cases the jurisdiction may not be able to justify the time and expense of establishing and maintaining a state-run lab.

The industry would benefit most from a core set of standards that are consistent across all jurisdictions and labs. This is not to suggest that there should be one standard for every jurisdiction, since each jurisdiction has its own laws and regulatory requirements. However, the majority of standards in all jurisdictions, while similar, can be interpreted differently. With consistency in interpreting standards, the manufacturers could design once, test once, and deliver the product to the customers faster. Both state labs and jurisdictions utilizing independent labs can and should work together to apply consistent standards where possible.

Do you think it is appropriate for labs to be involved with authoring standards?
Authoring technical standards will be most effective if it is an open process. While the final decision to adopt standards should fall on the shoulders of policymakers, the authoring of the standard should involve the most knowledgeable individuals from the public, labs, operators and the manufacturers. This will ensure the highest level of integrity and relevance in the requirements while balancing their implications on the public and the industry as a whole.

Within any regulatory standard, product compliance is often subject to interpretation (e.g., explicit vs. implicit requirements) by the lab, which ultimately leads to variation. In these cases, whose interpretation is “right”?
The more important question is what should be done to minimize the occurrence of such situations and how to avoid interpretation conflicts in the future. Communication is key in these situations. The conflict can be internal to a lab, between separate labs serving the same jurisdiction, or between jurisdictions. Labs have to proactively educate their staff on the explicit and implicit requirements of the standards they enforce. Labs have to constantly provide documented clarification of standards when it becomes apparent that there is a common misunderstanding in the industry, and finally, labs both private and public must work together to provide common understanding where standards are the same or similar. This is not an easy task, but without improved communication labs will continue to have a negative impact on the manufacturers’ ability to provide a consistent product to the operator and the patron.

Who is/should be ultimately responsible for the quality of the work that labs put out?
The continued growth and success of gaming is dependent upon public confidence and trust that gaming devices operate honestly. Public confidence and trust can only be maintained by strict regulation of the manufacture of gaming devices and the methods in which they operate. The regulator is tasked with this responsibility and therefore is ultimately responsible for the work product of the lab.

New Jersey Division of Gaming Enforcement
Eric Weiss
Lab Administrator

In your opinion, what constitutes success for a testing lab?
A successful slot lab is able to approve—or, if necessary, reject—submissions in an expeditious and efficient manner while ensuring that products are thoroughly evaluated according to applicable regulatory standards. In addition, the submission and testing processes should not be so costly, time-consuming or burdensome as to make the process itself part of the calculation on the part of a manufacturer or casino whether or not to pursue the approval of a product within a jurisdiction.

What processes and procedures does your lab use to measure and ensure its quality?
Two key factors in our effort to continuously improve the slot lab’s performance are data-driven assessment and communication. For all of our submissions, and for broader categories of the lab’s overall activity, we have established performance targets, which are broken down for our engineering and math units. In addition, to ensure that we are being as responsive to the needs of the industry as possible, we have implemented surveys and other less formal means to obtain helpful feedback on our operations. By measuring our output, establishing challenging but workable benchmarks, and listening to critical feedback from the industry, our lab is constantly reviewing its processes and trying to improve its performance.

Over the past year we have undertaken a variety of initiatives to increase productivity. For example, we cross-trained engineers on multiple platforms to provide greater flexibility in the assignment of submissions. In addition, we have encouraged manufacturers to set priorities for their submissions so that a product that has customers waiting can be jumped ahead in the testing queue. We have also modified our internal electronic tracking system so that we can now better evaluate the productivity of our personnel. Another of our innovations is the creation of an online tracking system that allows manufacturers to electronically monitor their submissions and to print Division of Gaming Enforcement and Casino Control Commission approval letters on demand. This allows manufacturers to track the progress of their submissions in real time and greatly reduces the amount of time manufacturers and regulators spend discussing the status of a submission.

Communication is always a critical element in our effort to improve. In addition to soliciting input from the industry, we have instituted an open-door policy with our engineers, field personnel and legal staff, making our entire bureau highly accessible to casinos and manufacturers. We have personnel deployed in every casino as needed to expedite the inspection and installation process. We have also created an online application that allows casinos and manufacturers to identify slot machine software that is revoked or scheduled to be revoked within the next three months. The application identifies the revoked software, as well as the casino and floor location of the affected slot machines.
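
A simplified sketch of the kind of lookup such an application might perform is shown below; the software identifiers, dates and floor locations are invented for illustration and do not reflect the Division’s actual system.

```python
from datetime import date, timedelta

# (software ID, casino, floor location, revocation date or None) -- invented records
installs = [
    ("SW-1001", "Casino A", "Bank 12, Pos 3", date(2009, 8, 1)),
    ("SW-1002", "Casino B", "Bank 4, Pos 7", None),
    ("SW-1003", "Casino A", "Bank 9, Pos 1", date(2010, 6, 15)),
]

today = date(2009, 7, 1)
horizon = today + timedelta(days=90)  # roughly three months out

# List software that is already revoked or scheduled for revocation within the horizon,
# along with the casino and floor location of the affected machines.
for software, casino, location, revoked_on in installs:
    if revoked_on is not None and revoked_on <= horizon:
        status = "revoked" if revoked_on <= today else f"revocation scheduled {revoked_on}"
        print(f"{software} at {casino} ({location}): {status}")
```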

As a result of our initiatives and improved communication with the industry, we have reduced the number of days that the lab needs to complete its review of a package from more than 50 days in 2007 to our current 2009 average of approximately 37 days. It should be noted that during this time period, despite the lab’s faster turnaround time, our rate of rejections/manufacturer withdrawals has actually increased.

Do regulators in your jurisdiction hold your lab accountable for the accuracy of its results?
New Jersey’s casino regulatory authority is vested in two agencies, the Division of Gaming Enforcement (DGE), which encompasses the slot lab, and the Casino Control Commission (CCC), which consists of members appointed by the governor. The CCC promulgates regulations, issues licenses and approves internal controls. The DGE prosecutes regulatory violations, investigates all license applications, and tests all slot machines and other electronic products to the standards enunciated by the CCC. The slot lab, which I head, issues reports of our testing results to the CCC, which approves or rejects such products for use in New Jersey casinos. The CCC does not have regulatory authority over the DGE, but it sets the standards to which the lab must test, and it has the power to reject our recommendations. In addition, the DGE is supervised by the Attorney General and is subject to her authority. Finally, the DGE is subject to legislative oversight by the New Jersey Senate and General Assembly.

Regulatory authorities that adopt technical standards determine policy. But who should decide the effective measurement of quality or fault of a test lab—the lab or the regulator?
In the New Jersey model, the lab and the regulator are joined within the Division of Gaming Enforcement, so there is no distinction between the two. Of course, the DGE strives to meet somewhat competing demands—the need to fulfill all of its statutory mandates and the industry’s interest in having innovative products introduced into the market as rapidly as possible. Although those interests sometimes compete, we take the view that the integrity of the market is always our primary concern, and we will act to protect that interest above all others. As a practical matter, our lab tries to accommodate industry needs whenever possible, and we are receptive to any recommendation for improving our operation. We find that by maintaining an open-door policy, we are able to quickly resolve issues in good faith, including those regarding the quality of our work, before they turn into more major problems.

What are/should be the comparative metrics used to measure test labs? What are the right metrics?
I do not believe that there is any single metric that can be used to measure the quality of a test lab. As a general, overriding goal, we seek to have a process that is sufficiently efficient and cost effective so that manufacturers will submit their newest products and technology to our jurisdiction for use in our casinos prior to, or contemporaneously with, other large casino markets.

Only one major regulated market (Australia) tracks and measures the work and quality of test labs at a national level. Should other markets consider adopting this model? Would it be feasible?
I do not believe it is feasible to promulgate standards for the accreditation of state-run slot laboratories. I think the quality of a state lab’s work is judged by the slot manufacturers and casinos active in its jurisdiction. If a state lab is inefficient or the quality of its work is poor, manufacturers will refuse to submit products to it, and casinos in the jurisdiction will complain to the appropriate oversight authority. In addition, the gaming press is a vital tool in the process of evaluating state-run slot labs. Indeed, customers who cannot play the latest games in a particular jurisdiction will use the forum provided by gaming publications to voice their dissatisfaction with the lack of new games. Manufacturers and casinos, too, are not shy about detailing the issues they may have with a state-run lab, whether in response to feedback from their customers or based on their own view of their treatment by regulators. The process, while not formalized, is effective, as it quickly becomes apparent which labs and regulators the manufacturers and casinos believe are doing a good job and which are not.

Which system of testing do you think is best for the industry—state-owned/operated labs or independent labs?
It is difficult for us to make a comparison because New Jersey has used only one type of system. New Jersey has had a state-operated lab since it passed legislation authorizing casinos in 1977. We can say, however, that the New Jersey state-operated lab system helped propel Atlantic City to the forefront of gaming in this country and has worked well for more than 30 years.

Our state-operated slot lab provides expert knowledge about issues specific to our jurisdiction. In addition to ensuring the integrity of the games and slot systems in use in every casino in our market, the slot lab’s expertise is frequently used to assist with regulatory investigations within the DGE and for criminal investigations by the New Jersey State Police and prosecutions by the Division of Criminal Justice. We have also provided assistance to various local, state, federal and international jurisdictions on a variety of gaming-related issues. Regulators throughout the world, as well as manufacturers and casinos in other jurisdictions, seek input from our lab and look to it as a model to be emulated.

Do you think it is appropriate for labs to be involved with authoring standards?
I think it is appropriate for a lab to have input into the standards but not necessarily to have final say or control over establishing the standards. In New Jersey, the CCC sets the standards for testing. It receives input from the DGE, including slot lab personnel. But it also considers the positions of its own staff, as well as casinos and slot manufacturers, before finalizing any regulation. In addition, any party is free to petition the CCC for a new regulation. Thus, the DGE’s lab has a voice in this process but does not have the final word on what any particular standard should be. With this model, there are appropriate checks and balances to make sure that the standards put in place take into account the interests of all relevant stakeholders, not just the party performing the testing.

Within any regulatory standard, product compliance is often subject to interpretation (e.g., explicit vs. implicit requirements) by the lab, which ultimately leads to variation. In these cases, whose interpretation is “right”?
The CCC is the ultimate decision maker in New Jersey. As a practical matter, the DGE’s engineers and legal staff are always willing to work with any manufacturer that has an issue with a particular regulation. Where appropriate, they will consider either interpreting an existing regulation favorably or seeking the promulgation of a new regulation. We believe the current system is effective.

Who is/should be ultimately responsible for the quality of the work that labs put out?
In the New Jersey model, as the head of the slot lab, I am initially responsible for the quality of the work of the lab and, ultimately, the director of the DGE is responsible for the quality of our work. I can speak on behalf of the director when I say that we take our responsibility very seriously and we are always willing to discuss any issue that a casino or manufacturer may have regarding the quality of the slot lab’s work.

Pennsylvania Gaming Control Board
Michael Cruz
Director – Bureau of Gaming Laboratory Operations

In your opinion, what constitutes success for a test lab?
Simply put, I believe that success for a test lab is achieved when a product passes all of the regulatory and compliance testing in the lab, then operates as expected once installed in the field.

Does your jurisdiction hold test labs accountable for the accuracy of their results?
In Pennsylvania the test lab is operated by the regulatory agency, so if our results are inaccurate or flawed, noncompliant products can reach the gaming floor, and we are directly responsible.

Do you think it would benefit the gaming industry for test labs to be more transparent in their methodologies and internal workings?
I think transparency is good to a certain extent. A manufacturer of a slot machine or system must know without ambiguity what its product is expected to do in the respective jurisdiction. I also feel that 100 percent transparency should not be the goal, because if the manufacturer knows every single test a lab performs, a product could be designed to pass all of those tests while purposely masking a flaw.

Regulatory authorities that adopt technical standards determine policy. But who should decide the effective measurement of quality or fault of a test lab—the lab or the regulator?
In Pennsylvania, the regulator is the tester. Not only do we test the slot machines and systems, but we also promulgate the regulations that these slot machines and systems must adhere to. In this case there is no confusion or ambiguity as to what is asked of the slot machine or system manufacturer in Pennsylvania.

What are/should be the comparative metrics used to measure test labs? What are the right metrics?
I do not think there are comparable metrics between public and private labs.

Do you think it is appropriate for labs to be involved with authoring standards?
I cannot answer for the private lab model, but in Pennsylvania I believe one of the reasons we have been so successful is that since the regulators are the testers, we are able to work directly with manufacturers regarding issues that are specific to our jurisdiction without having to go through an intermediary.

Within any regulatory standard, product compliance is often subject to interpretation (e.g., explicit vs. implicit requirements) by the lab, which ultimately leads to variation. In these cases, whose interpretation is “right”?
In Pennsylvania there are no such variations, as all regulations and technical standards are developed by the same staff that will be conducting the testing.

Who is/should be ultimately responsible for the quality of the work that labs put out?
As I stated previously, the lab is ultimately responsible for the quality of work in Pennsylvania, since it is the author of the standards as well as the tester that holds products to those standards.

SIQ
Zoran Svetik
Director – Testing and Measuring Technologies Division

In your opinion, what constitutes success for a test lab?
First and foremost, it is the trust of the supervisory bodies (regulators) in its integrity, technical competence and performance, which takes many years to build. Although laboratories perform testing on behalf of the regulators, it is equally important to win the confidence of the manufacturers whose products the lab tests. When manufacturers recognize quality-of-service attributes beyond technical excellence, such as short turnaround times, meaningful reporting, high-quality feedback information, competitive pricing and one-stop testing (e.g., offering electrical safety, EMC and similar tests simultaneously with gaming tests), a test lab can consider itself successful.

Does your jurisdiction hold test labs accountable for the accuracy of their results?
Although SIQ tests for many jurisdictions, for this question we treat Slovenia, where we are established, as “our” jurisdiction. We are required by law to analyze and explain any problems, bugs, non-compliances or other issues that emerge in subsequent use of an approved model. There is no provision stating what would happen if a flaw in testing were demonstrated, but if damage were caused, the laboratory would be responsible for it, and some kind of settlement would need to be reached or a lawsuit would follow. That is why indemnity insurance is one of the requirements for the laboratory. A general question in many jurisdictions, however, is whether regulators have sufficient technical resources first to find a flaw and then to prove it to the laboratory.

Do you think it would benefit the gaming industry for test labs to be more transparent in their methodologies and internal workings?
Labs nowadays are all ISO/IEC 17025 accredited, which ensures a basic level of transparency in their methodologies and internal workings. However, differences in test results are often the consequence of interpretations of requirements, which never keep pace with evolving technology. Measures that would contribute to transparency and resolve misinterpretations to some degree are:

• clearly defined competency criteria for test labs;
• verification of the actual testing methodologies by the regulators in addition to the accreditation body;
• providing for interpretation discussion between regulators and testing laboratories;
• clearly prescribing the contents and the information to be provided in a test report to show exactly which tests were performed and their outcomes; and
• monitoring the performance of laboratories.

Some regulators apply some or all of the above approaches, but doing so requires considerable resources on their side. The greatest danger lies in different labs taking different approaches to testing, which could eventually degrade the quality of testing across the board and, in the end, make the testing meaningless.

Regulatory authorities that adopt technical standards determine policy. But who should decide the effective measurement of quality or fault of a test lab—the lab or the regulator?
The lab cannot objectively assess itself. The regulators, on the other hand, in many cases do not have adequate resources to do this effectively. ISO/IEC 17025 accreditation does assure a certain level of quality and should prevent faults. The same standard also mandates inter-laboratory comparisons (an exercise in which participating labs test the same sample and another lab compares and evaluates the results), which in practice do not happen often enough because of concerns about unwanted IP disclosure, the small number of participating labs, and the lack of adequate technical standards. Besides a more rigorous inter-comparison regime, which accreditation bodies are already trying to pursue, the measures regulators could take, as described earlier, would greatly contribute to objective evidence of a lab’s performance quality.

What are/should be the comparative metrics used to measure test labs? What are the right metrics?
The quality of gaming tests can mostly be evaluated based on the correctness and completeness of the test reports, by addressing questions such as “Were all the required tests performed?” and “Were all the requirement interpretations in line with the regulator’s standpoint?” In this regard, the body issuing the type approvals for a certain jurisdiction is the one that can really evaluate the quality of a given test lab.

In Slovenia, the certification body issues type approval certificates, and by reviewing test reports in detail, its competent team discovers many mistakes, misinterpretations and omissions and provides meaningful feedback to the labs that originated the test reports.

In the long term, other information is gathered from the operators and regulators monitoring the performance of technologies when deviations are observed. This information, again, typically ends up with the body issuing the type approvals. That body has the means to establish metrics to compare labs with one another, albeit only for its own jurisdiction(s).

Only one major regulated market (Australia) tracks and measures the work and quality of test labs at a national level. Should other markets consider adopting this model?
Definitely yes; this would be beneficial to all parties involved. However, as mentioned earlier, it would require considerable resources on the regulators’ side and/or different regulators working together.

Do you think it is appropriate for labs to be involved with authoring standards?
Yes, by all means. All involved parties should contribute to the standards—regulators, manufacturers, operators and test labs. Any of these parties can greatly improve the quality and usability of technical standards by contributing their unique experience and insight. Naturally, the manufacturers, operators and test labs can only act as consultants or reviewers, and the regulators should always remain independent decision makers. Technical standards always support a particular policy that only the regulator knows in full, and variations among jurisdictions reflect different approaches to controlling the industry and its impact on society, all of which are political decisions.

Within any regulatory standard, product compliance is often subject to interpretation (e.g., explicit vs. implicit requirements) by the lab, which ultimately leads to variation. In these cases, whose interpretation is “right”?
Without doubt, there is just one “right” interpretation, and that is the regulator’s. All other interpretations are mere opinions. The problem is that with some regulators it is difficult to get unambiguous interpretations, or to get interpretations at all. In such cases, the lab is forced to decide on its own. This is certainly bad for the market and calls into question the whole reason for having a requirement whose interpretation is unclear.

Who is/should be ultimately responsible for the quality of the work that labs put out?
The responsibility always lies with the test lab. However, labs will maintain the quality level that is needed or expected under the regulatory and accreditation requirements. Therefore, while not assuming any responsibility themselves, the accreditation and regulatory bodies also indirectly affect the quality of the work performed in the test labs.
