Comment in response to APA's draft Standards and Criteria for Approval of Sponsors of Continuing Education for Psychologists.

17 Nov 2014 2:56 PM | SSCP Webmaster (Administrator)
SSCP, ABCT, and Division 12 collaborated on a joint statement for public comment in response to APA's draft Standards and Criteria for Approval of Sponsors of Continuing Education for Psychologists. Please endorse the comment at http://apaoutside.apa.org/EducCSS/Public/



Feedback on CE Standards Revisions

 

We thank the task force for their good work in revising the Standards and Criteria for Approval of Sponsors of Continuing Education for Psychologists. We recognize that this is not an easy task, and we were pleased to see numerous changes to the standards that will enhance the emphasis on using science to drive the approval criteria. For instance, we appreciated the references to “content that is fully supported by the most current scientific evidence.” Moreover, we were happy to see the added emphasis on assessing participants’ learning from CE programming (vs. only their satisfaction); e.g., Standard E, Criterion 3. We recognize that this can be hard to implement and applaud the inclusion of this criterion.

 

We appreciate this opportunity for public comment and would like to propose some additional edits and provide some feedback. Please see our comments below.

 

Also, we wish to let you know the process that led to these comments. We shared your proposed revisions with numerous organizations, including the Society for a Science of Clinical Psychology, Association for Behavioral and Cognitive Therapies, Division 12: Society of Clinical Psychology, along with multiple other divisions, and asked for comments.  Based on the feedback we received and our own discussions, we then created a document outlining a series of points we wished to raise in response to your proposed revisions. This document was then shared with the memberships of the same set of organizations, and a survey was created to ask their agreement with the points we raised and to solicit further comments. Finally, the document was further revised in response to the feedback, and then sent to the Boards of these organizations to check their approval of the final version.

 

Notably, over 250 people completed the survey, and endorsement of the individual points ranged from 79% to 91%, so the points we raise have received broad support from the memberships of multiple organizations.

 

Please note we would be happy to talk further about any of these points.

 

Thank you again for your work on this very important issue.

 

Bethany Teachman and Mitch Prinstein (current President and President-elect of SSCP), Dean McKay and Jon Abramowitz (current President and President-elect of ABCT), David Tolin (President of SCP-APA Division 12, former APA Office of Continuing Education committee member), Terry Keane (President-elect of SCP-APA Division 12), and Jerry Davison (former APA Office of Continuing Education committee member, Past-President of SCP-APA Division 12 and ABCT, and former Chair of COGDOP)

 

With endorsement from:

Board of Division 12: Society of Clinical Psychology

Board of Division 37: Society for Child and Family Policy and Practice

Board of Division 40: Society for Clinical Neuropsychology

Board of Division 53: Society of Clinical Child and Adolescent Psychology

Tim Wysocki, President of Division 54, Society of Pediatric Psychology

Board of Academy of Psychological Clinical Science

Board of Association for Behavioral and Cognitive Therapies

Board of Society for a Science of Clinical Psychology

 

Issue 1. Concern that (what we perceive to be) problematic programs that are not based on strong science will still qualify for approval.

Current wording. Standard D: Criterion 1.1 states:

“Program content focuses on application of psychological assessment and/or intervention methods that have overall credible empirical support in the contemporary peer reviewed scientific literature beyond those publications and other types of communications devoted primarily to the promotion of the approach.”

Comment: Overall, we like the direction of these changes and recognize that it is a considerable challenge to develop wording that simultaneously sets a high bar for needing strong empirical support for programming while also recognizing that not all “good” programming will naturally fit a typical randomized controlled trial (RCT) model. 

Our goal with these comments is to help ensure that sponsors have strong incentives to offer programming based on the best available science, and to discourage programming that is not grounded in science. Note, we are referring here to programming that is tied to intervention delivery, rather than programming on other topics, such as ethics, legal issues, etc.

One option would be to recognize that there are likely two categories of intervention programs that could be approved: programs that fit traditional RCT models based on DSM diagnoses or defined problem areas, and programs that build on other forms of science, such as basic research that supports principles of change. In both cases, the programming draws from empirical science, but we recognize that there is more than one form of scientific study that can usefully inform intervention training. For those programs that are designed to target specific diagnoses or broad-based psychopathology, or fit the RCT model, one suggestion is to add a criterion indicating that every sponsor is required to have at least 50% of their intervention program offerings within a given review period be based on the strongest available empirical support for that topic. Note, this language intentionally omits the soft qualifying language, such as “overall credible” empirical support, but still recognizes that available empirical support will vary across topics.

To specify what programs would count for this category, an easy guideline is to say that the programming must be on one of the lists of endorsed programs from published lists of treatment guidelines (e.g., the new APA treatment guidelines, the NICE guidelines, Division 12 list, etc.). While we appreciate that using lists of treatment guidelines is an imperfect heuristic, it is one way to ensure strong scientific programming while also keeping the burden low for reviewers and sponsors submitting applications. 

For the other intervention programming submitted for approval, the principles and components underlying the intervention still need to be based on strong empirical science, even though the program/package itself may not yet have strong support from RCTs, etc. This would allow cutting-edge (but still science-based) programs to be approved, so long as their components or principles are well tested. For those programs that are not derived from treatment guidelines, the sponsor would have to provide some explanation and citations arguing that the intervention being taught is plausible, both logically and in terms of its congruence with established knowledge and foundational assumptions of psychology and other sciences. Similarly, the presenters would need to explicitly disclose the state of the science for the work being presented.

Note that by recognizing these two categories of intervention programs, it is still explicit that both are based on sound, empirically supported scientific principles, so a program that does not have a strong empirical basis (either for the treatment package, or for its components or principles) would no longer qualify for approval.

We note that many of these same issues also apply to training in assessments, though there are fewer lists of empirically supported assessments to use as a heuristic to guide approval.

Finally, to achieve the goal of providing sponsors with strong incentives to offer programming based on the best available science, we encourage the task force to consider incorporating bonuses/incentives to encourage more science-based programming. For example, one could imagine CE approval application fee reductions based on the proportion of programming that was clearly based on the best available science, or longer approval terms (e.g., for 5 vs. 3 years), or APA could offer free advertising for science-based programming.

 

Issue 2. No mention is made of plan to rescind approval for programming that does not meet the guidelines.

Comment: The standards would be stronger if they included a clear statement indicating that failure to follow the guidelines or inconsistent use of the “most current scientific evidence” will ultimately lead to loss of approval. Although one could argue that loss of approval is implicit in the guidelines, we believe that an explicit statement about this consequence is important to encourage adherence to the guidelines. Note, we believe that adding this potential sanction for non-science based programming would complement the suggestions we made above regarding incentives for sponsors that offer strong science programming.

 

Issue 3. Lack of clarity in criterion wording.

Current wording. Standard D: Criterion 1.3 states:

“Program content focuses on topics related to psychological practice, education, or research other than application of psychological assessment and/or intervention methods that are supported by contemporary scholarship grounded in established research procedures.”

Comment: We found this wording confusing, and wish to propose a more explicit statement that so-called ‘basic science’ content could be approved: “Program content that can inform psychological practice but is derived from empirical research that does not directly target the application of psychological assessment and/or intervention methods, such as so-called basic science from non-clinical areas of psychology (e.g., social and cognitive psychology).”

 

Issue 4. Use of term “accurate” will not lead to useful reporting about the scientific support for the presented material.

Current wording. Standard D: Criterion 2 states:

“Sponsors are required to ensure that instructors, during each CE presentation, include statements that describe the accuracy and utility of the materials presented, the basis of such statements, the limitations of the content being taught, and the severe and the most common risks.”

Comment: It seems unlikely that anyone would describe their own material as inaccurate. Thus, it seems important to more clearly specify that instructors must report on the actual basis of scientific support for the presented material, including use of citations and a summary of both the empirical support for, and the negative or null results concerning, the content.

 

Issue 5. Timing of disclosure of conflicts of interest (COIs).

Current wording. Standard G: Criterion 2:

“Sponsors must make clearly evident to all potential participants, prior to registration, any known commercial support for CE programs or instructors. Any other relationships that could be reasonably construed as a conflict of interest also must be disclosed… “

Comment: We suggest requiring disclosure of COIs both when the program is submitted for review and at the time the program begins. In this way, the COIs can be considered by both the CE approval committee and the participants. Thus, sponsors would disclose their COIs when submitting programs for approval, and sponsors would also ensure that their presenters report their COIs during CE delivery (i.e., workshops, presentations, etc.).

 