CST blog

GDPR compliance and general update on CORE/CST work

I sent out an Email today to the 102 people signed up to the CST/CORE Email update list.  This particular Email was triggered by the GDPR legislation and confirmed that I am 100% confident that everyone on the list gave their information voluntarily in order to join, so I/we (i.e. CST) have consent to hold it.  That means the only other thing we needed to do to be fully GDPR compliant was to make available the details of our data processing and privacy policy.  For the lists, that data is actually stored by Mailchimp, whose privacy policy and details of data handling are at https://mailchimp.com/legal/privacy/.  Any other personal data I/we hold for CST is handled in line with our policy, now up at https://staging.coresystemtrust.org.uk/privacy-policy/.

OK. That’s all the legalistic stuff (and I, and we, do take data protection very seriously even though it ends up reading as very dull stuff).  What about CORE?  Well, CORE is thriving, though you wouldn’t know it from that Email list as that was the first time I’d actually sent a message there.  Partly that’s because CORE is now keeping me busy 4-7 days a week with things that always seem more important than sending out an update. 

The web site has been a problem: the man who owned the plugin we were using to serve up downloads seems to have disappeared and his plugin is broken.  I have replaced the plugin and now have to rewrite every single one of the 93 download links, which is very frustrating and going only slowly. 

Various CORE measures, and all or part of the CORE-A, are still very widely used in the UK despite not being forced on services by the NHS: that has left us true to our philosophy that good routine change measurement is “bottom up”, driven by practitioners, rather than “top down”, driven managerially or politically.  In other countries, particularly Italy and Spain, CORE is also increasingly used within a number of “bottom up” developments that are very exciting, and papers and reports that have used the instruments and measures continue to emerge far faster than we can keep track of them.  Most of my research time is taken up, as it has been for most of the last decade at least, with translations and psychometric explorations.

Now I really must get into a monthly rhythm of updates on the list and blog posts: there’s a positive spin-off from making sure we’re GDPR compliant! If you’ve been caught up as a researcher in GDPR anxieties, I recommend Emily Blackshaw’s blog post: GDPR-related anxiety disorder (academic-type).  She and I have had fun making ourselves 100% sure our YP-CORE work is fully GDPR compliant, and it is!

Update on CST work by me (Chris Evans)

Of the three of us, I’m responsible for the web site, most Email that comes via the site, and all the translation work. Since our post on 7/7/16 I have been very caught up in finishing my clinical post (permanently) at the end of July and then setting off on a trip of a lifetime: trying to cycle from London to Santiago de Compostela in Spain.  I had to sort out some nasty problems with the web site (WordPress is mostly great but some plugins we have had to use really aren’t) and do a number of pressing bits of translation-related work.  On my trip, finding time, energy (it’s hard work cycling in the heat, for all it’s a wonderful privilege!) and, above all, working IT/wifi/internet has been a nightmare, and I have given first priority with that time to the pilgrimage site (www.psyctc.org/pelerinage2016/)  

I am very sorry, but I am simply not going to be able to handle more than tiny CORE-related things until my trip is over and I’m back at work in the first week of October. However, one reason for finally stopping clinical work was to make more time for CORE, so I hope I will be able to make real inroads into the backlog when I do return.  Apologies for now … but do have a look at the pilgrimage web site: it may amuse you, and I do touch on important CORE and healthcare issues there; I always hoped to have time to think about them properly like this.  

CST, CORE and the UK “Brexit” vote

All three of us as CST trustees are deeply disappointed in the outcome of the referendum.  As a part of a collective recovery from the vote, it seems important to make a statement about our position.  CORE is tiny in terms of the impacts of “Brexit” on the UK and the EU, and the horrible and uncertain impacts for so many EU nationals currently in the UK and the students here or about to come here.  However, we believe it may help if many tiny things come together to underline that the vote was marginal, that many who voted to leave are regretting it, and that it was NOT just a vote of separatism and petty nationalism but also a vote of dissatisfaction with current political power structures (in the UK and in the EU).

Little and big things are needed to try to minimise the negative impacts of “Brexit” so we wanted to put the following points on record.

  • CORE’s origins are in the UK and the UK English versions of the instruments were the prototypes.  However, the CORE project was always inclusive and we always sought to achieve first-rate translations that were culturally appropriate.
  • Full European availability of the CORE instruments was always a CORE aspiration and we are near to having good translations into all official EU languages.
  • Our copyleft strategy and commitment to translations has always been part of that inclusive and internationalist stance.
  • Despite the narrow, divisive and painful “Brexit” vote CST will remain committed to usage and availability within the whole of Europe and more widely.
  • We cannot now lead bids for EU funding but we will continue to support anyone wishing to make such bids and will continue to build links with all EU countries.

Chris Evans (on behalf of CST)

The CORE-OM and the h-index

The CORE-OM was the yield from a 3-year grant from the UK Mental Health Foundation, which started in 1995 and resulted in the launch of the CORE-OM and associated system in 1998. One question with all such developments is: what has been its impact?

As those working in university settings will know, one index of the impact of an individual’s research is their h-index, a simple metric devised in 2005 by the physicist Jorge E. Hirsch to quantify the scientific output of an individual researcher. A researcher’s h is the largest number N such that N of their papers have N or more citations each. For example, an h-index of 10 means there are 10 articles that each have 10 citations or more. This metric is useful because it discounts the disproportionate weight of a few very highly cited papers and of papers that have not yet been cited. It also gets increasingly difficult to raise: for an h-index of 20 to rise to 21, there must be 21 articles each with 21 or more citations.
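Hirsch’s definition is easy to compute from a raw list of citation counts. Here is a minimal sketch in Python; it is purely illustrative and not tied to SCOPUS, Web of Science or any other database’s data format:

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # this paper still supports an h of `rank`
        else:
            break
    return h

# Ten articles each cited 10 or more times -> h-index of 10,
# matching the example in the text above.
print(h_index([30, 25, 12, 11, 10, 10, 10, 10, 10, 10]))  # 10
```

Sorting descending means the rank of each paper is exactly the number of papers cited at least that often, which is why the loop can stop at the first paper whose citations fall below its rank.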

So let’s suppose we imagine the CORE-OM as a researcher – what would be the h-index for the CORE-OM? Does this give us an idea of one aspect – there are many others – of the impact of the CORE-OM?

Well, as ever, nothing is straightforward. But a simple search on the term ‘CORE-OM’ in SCOPUS yields an h-index of 20 (one article was excluded as not relating to the CORE-OM). So, 20 articles using the CORE-OM have each been cited on SCOPUS at least 20 times.

The top 2 publications, each cited over 200 times, are understandably the key initial articles on the development of the CORE-OM, published in the British Journal of Psychiatry (2002) and the Journal of Consulting and Clinical Psychology (2001).

However, some key papers, particularly those early on, did not explicitly use the term ‘CORE-OM’, and some articles might use the CORE-OM in their report but not cite it in the Abstract, which is the usual source for electronic searches to pick up related work. Some early work was not identified, as some journals are not sampled by SCOPUS. However, there was the slightly inexplicable exclusion of an article containing ‘CORE-OM’ in the title!

So a slightly wider search (together with some filtering as more inclusive search terms collect work that is not relevant) yielded an h-index of 23. This search strategy picked up 2 early publications relating to the CORE-OM, both of which appeared in Journal of Mental Health (1998, 2000). However, even this search did not pick up some important articles that used other versions of the CORE-OM, in particular the Short Form A and B versions.

In contrast, a search of ‘CORE-OM’ on Web of Science yielded an h-index of 22 after a couple of papers were deleted as the research did not focus on the CORE-OM. The top 2 articles were the same as in the SCOPUS search. However, WoS does not search Journal of Mental Health, so some of the early work is not detected regardless of extending the search terms, but the search did pick up interesting work using the Short Form versions of the CORE-OM.

So, it would seem that we can say that the CORE-OM has an h-index of 23 according to SCOPUS and 22 for Web of Science. Although the profile of articles for each database is slightly different, the slightly higher h-index for SCOPUS is consistent with the database having a wider scope than WoS – you will likely find the same if you establish the h-index for your own research.

So, what does this mean for the impact of the CORE-OM?

Well, Hirsch suggested that after 20 years of research, an h-index of 20 is good, 40 outstanding, and 60 exceptional. So, on these simple guidelines, in less than 20 years the CORE-OM has had a good impact.

All in all, I guess that’s not bad. Using the h-index in this way is not as precise as applying it to researchers (a researcher either is or is not an author of a paper, whereas the role of the CORE-OM in the articles varies), and of course it is only one (imperfect) index of the impact that the CORE-OM has had.

If readers have examples of the impact that the CORE-OM has had for their work or practice, then let us know – we would be very interested to hear from you.

Thank you

What’s in a name (2): domains, scales, scores, factors & dimensions

The original commissioning specification for the CORE system required that the items in the measure covered domains of wellbeing (or “well being”, or “well-being”: there’s another naming issue!), problems/symptoms, functioning and risk.  The questions were supposed, where possible, to include intrapsychic ones and interpersonal ones; functioning was to cover both more personal/intimate and more social functioning; and risk was to cover intrapunitive and extrapunitive risk, i.e. risk to self and risk to others.  We liked this framework and noted that the first three domains had some links to the phase model of change in therapy, which suggests that well-being change comes first, then symptom/problem improvement, then functioning improvement (Howard, Lueger, Maling & Martinovich (1993) A phase model of psychotherapy outcome: causal mediation of change. J Consult Clin Psychol. 61(4):678-85).

We thought the commissioning specification was right that these were fairly conceptually distinct domains of experience that should be covered by a measure of change in therapy, a view supported by extensive surveys of therapists/practitioners, managers, commissioners (“purchasers” in the jargon of the time), end users and lay people.  We thought we should say which items we saw as belonging most strongly to which domains and offer the opportunity to study scores, and changes in scores, on each domain.  However, we never imagined that these would form clear “factors” or principal components in cross-sectional psychometric studies, nor that the chronological relationships between them over time in cohorts, or even within a single person in therapy, would be neat.  If you feel lousy (low wellbeing) it’s likely that you will have or develop problems and even symptoms, and vice versa.  Similarly, struggling to function well in personal interactions and/or at work or in caring duties will dent a sense of wellbeing and lead to problems: these simply aren’t independent factors or dimensions. 

With the advantage of hindsight it’s easy to see that we should have been clearer about that.  We tried to use the terms “domains” and “domain scores” in preference to “factors”, “dimensions” or “scales”, but slipped from time to time.  We thought we were sufficiently explicit that our use of exploratory factor analysis was exactly that: exploratory, and mainly to check that there was a large main factor and a good collection of smaller factors.  We were unsurprised in our early work (Evans, Connell, Barkham, Margison, McGrath, Mellor-Clark & Audin (2002) Towards a standardised brief outcome measure: psychometric properties and utility of the CORE-OM. British Journal of Psychiatry, 180, 51–60) to find a structure that didn’t reflect the domains but which seemed to some extent to separate positively cued from negatively cued items and to separate the risk items from the other items.  We never expected that this structure would replicate strongly in different cultures and samples, and we only used confirmatory factor analysis to show just how poor the fit to a simple factor structure was (Lyne, Barrett, Evans & Barkham (2006) Dimensions of variation on the CORE-OM. British Journal of Clinical Psychology, 45, 185–203).  That paper was intended to be a definitive statement about the expected psychometric structure, at least in British clinical samples.  Here’s the statement from the abstract:

The CORE-OM has a complex factor structure and may be best scored as 2 scales for risk and psychological distress. The distinct measurement of psychological problems and functioning is problematic, partly because many patients receiving out-patient psychological therapies and counselling services function relatively well in comparison with patients receiving general psychiatric services. In addition, a clear distinction between self-report scales for these variables is overshadowed by their common variance with a general factor for psychological distress.

And the end of the discussion:

These considerations with respect to the CORE-OM domains are of importance for future research and scale development, but the utility of CORE-OM has already been demonstrated as a widely used benchmarking measure and reliable indicator of change in psychotherapy research and practice. The scoring method that has proved most useful in this regard is that in which all 28 non-risk items are scored as one scale and the risk items as the other. This research confirms that the scale quality of CORE-OM when scored in this way is satisfactory.

So some suggestions/pleas:

  1. by all means report change on specific domain scores if they are pertinent to the work that went on with the client/patient, but don’t imply that the specific scores are well-defined factor-analytic scales;
  2. the risk and non-risk items are sufficiently distinct in cross-sectional psychometric studies that it may be wise to report the non-risk and risk scores as well as the total scores in almost any study;
  3. if you possibly can, talk about the scores from the CORE-OM and CORE-SF/A and SF/B as “domain scores” not “dimensions” or “factors”.

What’s in a name (1): scoring CORE measures

We may have caused a bit of confusion by introducing the term “Clinical score”.  Perhaps it’s not on the scale of the Capulet/Montague name tragedy (Shakespeare, 1591-1595?) but it may be worth explaining the scoring here as I do see mistakes and do get asked about this.

History

We started out scoring using the mean of the items and recommending pro-rating if not more than 10% of items were missing, i.e. using the mean of the remaining items.  That meant you could get a pro-rated mean overall score for the CORE-OM if as many as three items were missing, for the “non-risk” score if two of the non-risk items were missing, for the function and problems scores if one of their items was missing, and you couldn’t pro-rate if any items were missing for the well-being or risk scores.  You could get overall scores for the CORE-SF/A, CORE-SF/B if one of their items was missing (but not for domain scores as any missing item there means more than 10% of the items are missing).  Similarly, you could use a pro-rated score for the GP-CORE, the LD-CORE, the YP-CORE and the CORE-10 if one item was missing but pro-rating the CORE-5 was clearly impossible. 
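The 10% rule above can be written down directly. Here is a minimal sketch in Python; it is my own illustration of the rule, not official CST code:

```python
def prorated_mean(answered, n_items):
    """Pro-rated mean item score (0-4) for a CORE measure.

    `answered` holds the scores of the items actually completed;
    `n_items` is the measure's full item count (e.g. 34 for the CORE-OM).
    Returns None when more than 10% of the items are missing,
    in which case no score should be reported.
    """
    missing = n_items - len(answered)
    if missing / n_items > 0.10:
        return None  # too many items missing to pro-rate
    return sum(answered) / len(answered)

# CORE-OM with 3 of 34 items missing: still pro-ratable (3/34 is under 10%)
# CORE-OM with 4 of 34 items missing: not pro-ratable (returns None)
```

Checking the rule against the cases in the text: 3 of 34 missing is about 8.8% so the CORE-OM can still be pro-rated, 4 missing is about 11.8% so it can’t, 1 of 10 missing is exactly 10% so the CORE-10 can be pro-rated, and any missing item on the CORE-5 is 20% so it can’t.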

All those scores had to lie between 0 and 4 by definition but they could be awkward-looking numbers like 0.84, and over the early years we got feedback that many clinicians and managers didn’t like these “less than one and fractional” scores. 

“Clinical Scores”

With mixed feelings in the team, the idea of “Clinical Scores” came in: the item mean as above, but multiplied by 10 to get a score that in clinical samples would pretty much always be an x.y sort of number with x >= 1, and scores ranging between 0 and 40. The same rules about pro-rating were retained.  This “x10 = Clinical Score” gives rather easy scoring for a complete CORE-10 or complete YP-CORE: the “Clinical Score” is just the sum of the 10 items completed (but if one item is omitted you still have to find the mean of the nine completed items and multiply that by 10).   For a completed CORE-5 the route to the “Clinical Score” is almost equally easy: the Clinical Score is twice (2x) the sum of the five items’ scores.
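Putting that together, the “Clinical Score” is just the pro-rated item mean multiplied by 10. Another quick Python sketch of my own (again illustrative, not official CST code):

```python
def clinical_score(answered, n_items):
    """'Clinical Score' = pro-rated mean item score x 10 (range 0-40).

    `answered` holds the scores of the completed items; `n_items` is the
    measure's full item count. Returns None when more than 10% of the
    items are missing (same pro-rating rule as the mean item score).
    """
    missing = n_items - len(answered)
    if missing / n_items > 0.10:
        return None
    return 10 * sum(answered) / len(answered)

# Complete CORE-10: the Clinical Score equals the plain sum of the items...
print(clinical_score([2, 1, 3, 0, 2, 1, 1, 2, 0, 3], 10))  # 15.0
# ...and a complete CORE-5 scores at twice the sum of its five items.
print(clinical_score([1, 2, 3, 4, 0], 5))                  # 20.0
```

Note that the shortcuts only hold for complete questionnaires; with any item missing you are back to the mean-times-ten calculation.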

We sometimes see people reporting the sum of the items: please don’t do that, we’ve never recommended it anywhere.  We also see people not saying explicitly whether they’re using the original “mean item score” or the “Clinical Score”: please do say which you used even if it seems very obvious.  Finally, we encourage people always to be explicit about having used pro-rating (if you have) and about the numbers of incomplete questionnaires and numbers of items missed. This all maximises comparability of reports.  Non-comparable scoring may not be as lethal as the Verona family feud was to Romeo, Juliet and Mercutio, but it’s definitely to be avoided!

Reference

Shakespeare, W. (1591-1595, exact date uncertain) “Romeo and Juliet”. Available in many versions, as the peer-reviewed format hadn’t been invented: quarto 1, quarto 2, first folio and later versions.  However, the fatal name issue is consistent in all.