Author: Elizabeth Lemmon

Course Round Up: The Whys and Hows of applying to the Public Benefit and Privacy Panel for Health and Social Care (PBPP)

Date of course: Wednesday 11 March 2020
Organised by: Wellcome Trust Clinical Research Facility
Post summary: In this post I provide a run through of the course: The Whys and Hows of applying to the Public Benefit and Privacy Panel for Health and Social Care (PBPP). As the title suggests, the course – delivered by PBPP Manager Dr Marian Aldhous – covered two main areas: why you would need to apply to the PBPP, and how you would go about doing this. My thanks go to Marian, who has kindly let me use her slides to write this post. In a rush? Skip to the Top Tips for filling in your application and some of my reflections on the course (where you will also find links to an example Tooth fairy PBPP application and associated documents!).

Post Contents: 

  1. What is the PBPP?
  2. What is the legislation and principles covering aspects of information governance for the use of NHS Scotland data for purposes other than direct care?
  3. What is the remit of PBPP?
  4. When do you need a PBPP application?
  5. How does the PBPP application process work?
  6. How long is your PBPP application going to take?
  7. How to fill in your PBPP application according to the 5 Safes
  8. Top Tips for filling in your PBPP application
  9. Group discussion and reflection on the concerns raised
  10. Final thoughts
  11. Useful definitions

1. What is the PBPP?

PBPP is a combination of a patient privacy panel and an information governance panel, set up by Scottish Government eHealth to provide a single, consistent, open and transparent scrutiny process for health data used for purposes other than direct care, including research. The panel exists to ensure the right balance between safeguarding the privacy of people in Scotland and the duty of Scottish public bodies to make the best use of data. PBPP provide leadership in the complex privacy and information governance domains so that:
  • Scottish people gain the benefits from the use of data
  • Emerging information risks are managed
  • Public concerns around privacy are addressed
  • Protection of privacy in the public interest is promoted
They have a scrutiny role on behalf of patients with respect to the information you are going to find out about them, in work that is not related to their direct care and that involves information not in the public domain. They check whether the use of the data is justified and reasonable, and whether it will achieve its purpose. Further, they scrutinise how damaging it would be if the information were leaked. They are there to ensure that applicants have considered the public benefits and the privacy implications for participants and their data. Moreover, they are there to provide assurance of the ‘technical and organisational arrangements’ that ensure respect for the data minimisation principle (GDPR Article 89(1)). What was really clear from Marian’s presentation on the role of PBPP was that they are not there to trip applicants up or to prevent work from going ahead.

Back to contents.

2. What is the legislation and principles covering aspects of information governance for the use of NHS Scotland data for purposes other than direct care?

The UK Data Protection Act 2018 applies when processing (that basically means using or storing) personal data relating to living individuals; this includes pseudonymous data.

For personal data

For the lawful processing of personal data we look to Article 6(1) of the GDPR, which states that the processing of personal data is lawful only if and to the extent that at least one of the following applies:
  a) The subject has consented
  b) Performance of a contract
  c) Compliance with a legal obligation (under specific legislation)
  d) Protection of vital interests, i.e. to save someone’s life
  e) Performance of a task that is in the public interest
  f) Legitimate interests of the controller
Point (e) is the most common legal basis given in PBPP applications for the processing of personal data. Note that there are very good reasons why the others are NOT used. Specifically, consent for taking part in research, under the Research Governance Framework, is different from consent obtained for processing data under GDPR. This is one of the reasons you are NOT encouraged to use consent as your legal basis under 6(1) or 9(2). Also, legitimate interests can only be used by non-public-sector bodies (commercial organisations or charities). So, 6(1)(e) is the most common because it is the most appropriate for the tasks usually covered by PBPP applications.

For sensitive personal data

For the lawful processing of special category sensitive data, we look to Article 9 of the GDPR:
(1) Processing of personal data revealing racial or ethnic origin, political opinions, religious or philosophical beliefs, or trade union membership, and the processing of genetic data, biometric data, data concerning health (physical and mental) or data concerning a natural person’s sex life or sexual orientation shall be prohibited.
(2) Paragraph 1 shall not apply if one of the following applies:
  a) The subject has given explicit consent
  b) Necessary for the obligations and rights of the controller/subject for employment or social security
  c) Necessary for the vital interests of the subject
  d) Legitimate activity of a not-for-profit body with a political, philosophical, religious or trade union aim
  e) Data made public by the subject
  f) Necessary for legal claims or the judicial capacity of courts
  g) Substantial public interest
  h) Preventative or occupational health, assessment of the working capacity of an employee, medical diagnosis, provision of health and social care
  i) Public interest in public health
  j) Necessary for archiving in the public interest, scientific or historical research purposes or statistical purposes in accordance with Article 89(1) (i.e. subject to appropriate safeguards for the rights and freedoms of the data subject)
The most appropriate basis depends on the purpose of the application. If your application is for the use of health data, it would usually be covered by 9(2)(h), 9(2)(i) or 9(2)(j), as these are the bases linked to health. For applications looking at NHS/medical processes (e.g. audits, health care planning or service improvement), 9(2)(h) would be used. For public health or infection control, you would most likely use 9(2)(i). For any research, 9(2)(j) should be used. If you are ever in doubt, you can always talk to your eDRIS coordinator for advice.

The Common Law Duty of Confidentiality also applies to personal data that are not already in the public domain; for example, patients who have shared personal medical information with their GP expect it to be kept confidential. The Caldicott Principles and Data Protection Principles outline the special circumstances under which this information can be shared.

Back to contents.

3. What is the remit of PBPP?

The PBPP replaces the Privacy Advisory Committee (which covered research), National Caldicott Scrutiny Panel (which covered both research and non-research), and CHI Advisory Group (which also covered research and non-research). PBPP have the authority to scrutinise applications for the use of NHS Scotland controlled data and National Records of Scotland controlled NHS Central Registry data for research, healthcare service planning and improvement, audit and other well defined and bona fide purposes. This scrutiny covers the whole process from patient to data provision/analysis. In 2017/19, around 53% of applications to PBPP were from academic researchers.

Back to contents.

4. When do you need a PBPP application?

An application to PBPP is mandatory for:
    • Any use of sensitive or identifiable NHS Scotland data other than for direct care
    • Use and linkage of NHS Scotland National Services Scotland ‘national’ datasets
    • Use of NHS Scotland data from multiple boards
    • Linkage with external (non NHS Scotland) data
    • Linkage to primary research data
    • Access to individuals’ clinical data without consent
    • Transfer of NHS data outwith Scotland
An application is optional for:
    • Any other use of NHSS data considered sensitive, novel or complex, or with wider national implications
    • Use of data from primary care providers, and/or from beyond NHS, but with implications for the service
An application is not required for:
    • Use of PII from only one NHS Board (Caldicott Guardian approval), unless it requires linkage using national datasets
    • Use of data from your own board for direct care
    • Clinical research where covered by other Information Governance processes

Back to contents.

5. How does the PBPP application process work?

There is a single PBPP form for all applicants, and detailed guidance is given on filling it in (this is covered in the second part of this post). Entry to PBPP goes through the Electronic Data Research and Innovation Service (eDRIS). The eDRIS team advise applicants on the datasets and variables that are available, and on the capability of those data to meet the objectives of the applicant’s proposal. They also help with filling in the PBPP form itself, and work closely with the PBPP team when helping applicants prepare their applications. Beyond that, the eDRIS team work on the provision of data from different sources, organise access to the Safe Haven, carry out disclosure checks, and offer support for data analysis. Clearly, a very busy team covering a wide range of areas! The diagram below outlines these roles.

Note as well that there are two PBPPs: a health one (the Health and Social Care PBPP) and a statistics one. All non-NHSS (external) data go to the Statistics PBPP (S-PBPP). This includes ScotXEd education data, NRS census data (for which data take a minimum of 6 months after S-PBPP approval), social care data, and HMRC and DWP data (though possible in theory, you are unlikely to be able to obtain these, but that’s another story…). There tend to be longer time frames involved in getting approval for external datasets. So, the whole process (or the eDRIS sandwich) looks like the diagram below, which I found really helpful in picturing how the scrutiny process works.

All applications go to Tier 1. Around 5 applications are scrutinised every fortnight (in 2017/18, the panel saw 136 applications). They are assessed according to a proportionate governance traffic light system relating to the criteria set out in the PBPP application. Those assessed as Green are all OK at Tier 1 and are approved, or approved with some conditions, e.g. ethical approval to be obtained. Sometimes they will require clarification of minor points or changes to the form, which would then be checked by the PBPP manager and approved. Those that are Amber (medium risk) may need further clarification from applicants. Those responses will be reviewed by the same people who reviewed the application at the panel meeting; this happens by email and the panel does not meet again. Those that are classed as Red have issues that cannot be tolerated; they are referred to Tier 2, with or without clarification. Applications can also be referred for re-submission if too many major changes are needed.

Amendments can also be made after approval, but this should be the exception, and any amendment must be within the original scope of approval. They can be made for things like a change of institution, addition of variables, or changes to storage location/mechanisms. Amendment forms are available on the PBPP website and must be submitted via your eDRIS coordinator.

Back to contents.

6. How long is your PBPP application going to take?

This is the question we all really want answered, especially when we are planning projects with limited funding. The timing can be split into three puzzle pieces.

Pre-PBPP submission. This stage of the process is mainly down to you (at least once you have been allocated an eDRIS coordinator). The time taken depends on the number of iterations your application needs, so being thorough and clear when first filling it in will help. It is also influenced by the complexity and clarity of the project: you have to be incredibly clear and concise when outlining your research plans. Top tip: use diagrams where you can!

PBPP submission to PBPP approval. This part of the process is mostly very well defined, and evidence is available on these timings. The figure below shows data from the 2017/18 PBPP annual report. ‘Clocked days’ is the number of working days the application is being processed by the PBPP; the time applicants take to respond to any queries about the application is not included. The ‘total’ number of working days from submission until the final decision includes any time spent back with the applicant. The Tier 1 panel meet every fortnight and see 5 applications. The timing for PBPP scrutiny and review depends on the number of iterations the application goes through and the speed of panel members’ responses. The complexity and clarity of the proposal are also important factors affecting the time to approval. Tier 1 is faster than Tier 2 (Tier 2 meet less often and, by definition, your application will already have been through the Tier 1 process).

Post-PBPP approval. This appears to be the most uncertain part, as it depends on so many factors: the waiting list for an eDRIS analyst, whether you are requesting data from different sources, the overall complexity of the project, the amount of data required, and the need for data sharing agreements.

Back to contents.

7. How to fill in your PBPP application according to the 5 Safes

So, we know that the PBPP are there to weigh up the public benefit of an application against its privacy risk. They carry out this assessment by considering the Five Safe Principles, which coincidentally correspond to sections in the application. When you are filling in your application you must demonstrate how you meet the 5 Safes. In what follows, I outline the main questions that PBPP ask you to answer in your application. Some of them overlap somewhat, and they should not be treated as a complete checklist (every project is different!), but they will help to ensure you demonstrate the 5 Safes.

Safe People

The PBPP will be looking for:
  • Who has access to the data?
  • Who needs to know? Caldicott Principle 1!
  • How responsible are the applicants/analysts?
    • What is their knowledge and experience?
    • What training do they have?
      • IG training is required for an application (applicants, PhD supervisors, clinical leads, data custodians and anyone accessing patient-level data, including pseudonymised data, needs to have up-to-date IG training)
      • Links to possible courses are on the PBPP website
      • Training must be renewed every 3 years
    • Who is responsible for ensuring the applicants do what they say? Accountability principle!

Safe Organisations

The PBPP will be looking for:
  • Which organisation is responsible for the data?
    • Which organisation is the data controller? This affects the main contact, which DPO should be consulted, and the purpose of the proposal
    • Responsible for the data
    • Researchers with NHS / University contracts
    • Who will keep the researchers accountable?
    • Does this change at different points in your proposal?
  • How safe is each organisation?
    • Is it a known public organisation / charity /company?
    • Who will become Data controller?
    • Is there a Data processor involved?
    • Data processing agreement in place?

Safe Projects

The PBPP will be looking for:
  • Is this an appropriate use of the data?
  • Project information
    • Background / Aims & objectives / Methods / Outcomes
      • Be very clear in your description and objectives.
      • Write so that a non-expert can understand.
      • Write about the whole process- from patient to data analysis.
    • Is the use of data necessary? Can it be done another way?
      • Be clear about variables requested
      • Bear in mind the principles of data minimisation
      • Justify the need for every single variable
    • Is the project ethical?
    • Where will the data go? Who will access it? Top Tip: Use flow diagrams! This can really help you to see what agreements will be needed, between which organisations.
    • What is the population for which data are requested?
    • Would they expect their data to be used for this purpose?
    • How will the processing take place?
    • Is the processing lawful, fair and transparent?
      • You MUST state the legal basis for processing data. GDPR Article 6(1) for personal data (including pseudonymised data) and GDPR Article 9(2) for special category data.
    • How will the rights of the subjects be upheld?
  • What is the public benefit?
  • Has the applicant carried out any public engagement? (may not apply to all applications)
    • Have lay people been involved in the project design? If not, why not?
    • Do the public see the benefit in the project you wish to do?
    • Would they feel that the types of data requested are reasonable?
  • Has any peer review of the proposal been carried out?
  • Has there been a review from ethics?
    • NHS REC opinion
    • University ethics committee
  • Has the applicant assessed the privacy risks?
    • Have they carried out a Data Protection Impact Assessment? Note that this can be a legal requirement, depending on the nature of the processing. If not, why not? (It’s good practice to do this and a lot of it overlaps with the content required in the PBPP).
  • Other approvals
    • If you are a data processor, you will need a Data Processing Agreement setting out the processing instructions.
    • Approvals from out with Scotland
    • Approvals from another Data Controller for linkage to non-health data.

Safe Data

The PBPP will be looking for:
  • How identifiable are the data?
    • Are identifiers used for processing only? Make this clear!
    • Do combinations of variables make individuals identifiable e.g. rare diseases in small populations?
    • Are the data anonymised or pseudonymised?
  • Are the data highly sensitive?
  • Are you adhering to the principles of data minimisation?
    • Are the data relevant?
      • Too much data? Are all variables necessary? Can you use partial or derived variables?
      • Too little data? Will they fulfil the aims?
    • Justification for requesting these data variables
    • Are all the details necessary e.g. full dates, full postcodes?
  • What will happen to the data at end of project?
  • What are the sources of data requested?
    • For new data
      • How is it being collected?
      • Who is the data controller?
    • For existing datasets
      • Who are the data controllers?
      • If not NHSS do you have permission?
    • Who is carrying out the cohort identification and/or data linkage, and how? This should be done by a third party.
  • How do individuals know about the use of their data?
  • What would individuals expect you to do with their data?
    • Participant information leaflets
    • Privacy notices on NHS Board websites
    • Generic NHS leaflets/website links
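To make the data minimisation points above concrete, here is a minimal sketch (in Python, with entirely hypothetical field names and values; nothing here comes from a real NHS dataset) of how full identifiers can be replaced with partial or derived variables before analysis:

```python
from datetime import date

def minimise(record, study_date):
    """Replace precise identifiers with partial/derived variables.

    `record` is a hypothetical patient row; only the derived fields
    needed for analysis are returned (data minimisation principle).
    """
    dob = record["date_of_birth"]
    # Age in whole years at the study date
    age = (study_date.year - dob.year
           - ((study_date.month, study_date.day) < (dob.month, dob.day)))
    return {
        # 5-year age band instead of the full date of birth
        "age_band": f"{(age // 5) * 5}-{(age // 5) * 5 + 4}",
        # Postcode district (e.g. 'EH8') instead of the full postcode
        "postcode_district": record["postcode"].split()[0],
        # Year of admission instead of the full date
        "admission_year": record["admission_date"].year,
    }

row = {"date_of_birth": date(1957, 6, 3),
       "postcode": "EH8 9YL",
       "admission_date": date(2019, 11, 20)}
print(minimise(row, date(2020, 3, 11)))
```

Requesting the derived variable (age band, postcode district) rather than the raw identifier (full date of birth, full postcode) is exactly the kind of justification PBPP look for under Safe Data.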

Safe Settings

  • From where will the data be accessed?
    • Will it be accessed in a Safe Haven? This is what NHS Scotland prefers!
    • If not in Safe Haven, why not? Consider:
      • How secure is the data collection process?
      • How secure is the transfer of data?
    • Will the data be accessed securely (data protection principle 6)?
      • Will it be accessed remotely?
      • Can anyone see over your shoulder?
      • Will the data be pseudonymised?
      • How will access be monitored?
    • Will the data be transferred securely?
    • Will the data be stored securely?
      • For how long?
      • Will it be destroyed? If so how?

Safe Outputs

  • What will be the outputs of the analysis?
    • Disclosure control. Beware small numbers! Groups of fewer than 5-10
  • Who will do disclosure control?
  • How aggregated is the data?
  • How identifiable is the data within the outputs?
  • Is there any confidentiality risk from publication?
  • What will happen to the data at the end of the analysis and at the end of the project?
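The small-numbers warning above is usually handled with primary suppression before any output leaves the Safe Haven. A minimal sketch (the threshold of 5 and the example table are illustrative only, not a statement of PBPP policy):

```python
def suppress_small_counts(table, threshold=5):
    """Apply primary suppression: mask any cell count below `threshold`.

    The post warns about groups of fewer than 5-10; the exact
    threshold is set by the disclosure control policy, not here.
    """
    return {group: (count if count >= threshold else "<5")
            for group, count in table.items()}

# Hypothetical counts by area, before release from the Safe Haven
counts = {"Lothian": 412, "Borders": 3, "Fife": 57, "Orkney": 1}
print(suppress_small_counts(counts))
```

Real disclosure control is more involved; for example, secondary suppression may be needed so that a masked cell cannot be recovered from row and column totals, and it is the eDRIS team who carry out the final checks.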

Back to contents.

8. Top Tips for filling in your PBPP application

DO
  • Read the latest version of the guidance notes on the PBPP website
  • Use lay language and be concise
  • Use diagrams and flow charts
  • Take advice from your eDRIS coordinator. They know a lot about the data and its capability in meeting your project objectives!
  • Take care while filling in the form- carelessness raises questions of care taken elsewhere
  • Read and answer the questions asked
  • Be consistent across different questions
  • Explain ALL abbreviations and technical terms
  • ‘Tartanise’ your application
  • Be aware that different legislation applies in Scotland and England
  • Set realistic end dates
  • Clearly label your supporting documents to match what you put into the PBPP form
  • Look at this very handy Tooth fairy PBPP application and corresponding data dictionary of variables, along with an example DPIA and privacy notice.  They have been put together by PBPP Manager Dr Marian Aldhous so you can see what a successful application looks like. Note that this is just ONE example and every application is different!
DON’T
  • Don’t just copy and paste from other documents. They may not ask the same questions and they may have mistakes
  • Don’t copy from the guidance and include the note that says you shouldn’t use this...
  • Don’t assume the panel knows about your proposal, your area of research or your local processes. All needs to be explained clearly
  • Don’t forget that behind each data variable there is a patient, who might be interested in your results.

Back to contents.

9. Group discussion and reflection on the concerns raised

The general feeling in the room was that the course was very helpful. However, some participants raised concerns. One concern was around ethics: it seemed some were confused about what ethical approval they required, and they felt they were filling in a lot of forms. I disagreed with this; as an academic who has worked with administrative health data, I found the ethics side of things actually the more straightforward part. But I’d be keen to hear others’ views on this. It’s no surprise that another concern was timing, but clearly timing depends on many factors which are highly individualised to specific projects.

On timings, we have those three pieces of the puzzle: writing your application to submission; submission to approval; and approval to data access. The middle piece is very clear, at least for the majority of projects, and timings are published in the PBPP annual reports. The other two depend on many external factors. What can we do to influence them?

Puzzle piece 1: Writing your application. I’d strongly suggest taking this course or reading this blog post (hey, if you’ve read this far, you’re already part way there!). If you’ve done the background work thoroughly and you write a good application, it won’t need to go through as many iterations with your eDRIS coordinator; you will save yourself some time and make the lives of eDRIS easier. PBPP Panel Manager Dr Marian Aldhous has put together a very handy Tooth fairy PBPP application and corresponding data dictionary of variables, along with an example DPIA and privacy notice, so you can see what a successful application looks like. Note that this is just ONE example and every application is different!

Puzzle piece 2: From application submission to approval. We’ve got this one covered: see Section 6, How long is your PBPP application going to take?

Puzzle piece 3: From approval to data access. This is the tricky piece, and the timing at this stage will vary hugely from project to project. At least, that’s what I assume; the truth is, we don’t really know. So what can we do? This is one of the reasons I set up eCRUSADers: to try to build up an understanding of the time it takes to get access to data. But realistically, I doubt every PBPP applicant is about to come forward and share their experiences with us. One suggestion might be to publish, at the point of data access, a clear outline of the datasets/variables requested and the timelines for the three parts of the puzzle. This could simply take the form of the PBPP application, or just a table filled in with those timings. Alternatively, end-of-project reports could be made available which detail this information. Once we know the timing from approval to data access, as well as the factors which might influence it (e.g. which datasets are requested, how many years of data, etc.), we would be better equipped to plan research projects with limited timelines.

Back to contents.

10. Final thoughts

Overall, The Whys and Hows of Applying to the Public Benefit and Privacy Panel for Health and Social Care is a very useful course, and I’d recommend you get a space on it if you are thinking about using Scotland’s administrative health data. It will take you half a day, but it could save you much more time in the long run. I’d maybe even go further and say that it should be compulsory… The PBPP is not there to trip you up; it’s there to ensure the balance of public benefit and privacy risk. They are on our side and just as keen as we are to make the processes easier and quicker. Timing remains our biggest challenge, and there are bits and pieces we can do to speed things up. Having said that, the biggest timing challenge we face is from PBPP approval to data access. Unfortunately, there is little we can do to influence this, and that has to change.

Back to contents.

11. Useful definitions

Anonymous data
Anonymous data cannot identify any individual. Removal of identifiers does not necessarily make data anonymous: in anonymous data, no combination of variables would allow an individual to be directly or indirectly identified. Anonymisation is irreversible. Anonymous data are not subject to the Data Protection Act 2018.

Data Controller
Controllers are the main decision-makers – they exercise overall control over the purposes and means of the processing of personal data. If two or more controllers jointly determine the purposes and means of the processing of the same personal data, they are joint controllers. However, they are not joint controllers if they are processing the same data for different purposes. Controllers shoulder the highest level of compliance responsibility – you must comply with, and demonstrate compliance with, all the data protection principles as well as the other GDPR requirements. You are also responsible for the compliance of your processor(s). (From the Information Commissioner’s Office website.)

Data Processor
Processors act on behalf of, and only on the instructions of, the relevant controller. Processors do not have the same obligations as controllers under the GDPR and do not have to pay a data protection fee. However, if you are a processor, you do have a number of direct obligations of your own under the GDPR. (From the Information Commissioner’s Office website.)

Data Protection
Data protection is concerned with the safe use of personal data. The UK Data Protection Act 2018, which incorporates the EU General Data Protection Regulation (GDPR), outlines the data protection principles that organisations, businesses and the government must follow when using personal data.

Personal data
Any information which, either alone or combined with other data, leads to the identification of an individual. This could be a name, phone number, IP address or cookie identifier.

Pseudonymous data
Pseudonymous data have been altered so that no direct identification of any individual can occur; however, additional information held by you or someone else allows an individual to be identified. Pseudonymous data are personal data and are subject to the Data Protection Act 2018.

Special category personal data
Personal data which are subject to more scrutiny when determining lawful processing. They include things like race, ethnicity, medical conditions (physical and mental), sexual life, religion, philosophical beliefs, politics and trade union membership, as well as criminal convictions/alleged offences and genetic and biometric data. (From the Information Commissioner’s Office website.)

Back to contents.

Using Administrative Data in a Clinical Trial

In this post, Catriona Keerie, Senior Statistician within Edinburgh Clinical Trials Unit (ECTU) talks to us about her work within ECTU and her involvement on a rare Scottish trial that used administrative health data. She provides some great diagrams to help along the way, which I can tell you are essential if you want to understand the complicated structure of the data! Catriona also highlights some of the key challenges the team faced in terms of data access and use and offers her reflections on what they learned from the project which could help other trials like this one in the future.

Can you tell us a little about your role in ECTU? 

My role involves a variety of tasks – however, primarily my role is the statistical reporting of trials run from within ECTU. I typically have up to eight active trials throughout the year. My role varies on these – I am Trial Statistician for approximately half of them, and the ‘reporting’ statistician for the other half. When I have my reporting statistician hat on, I’m responsible for the statistical programming and generating the analysis and results.

How many trials have you worked on that have involved using administrative data? 

Since I joined ECTU in 2014, I have worked on three trials using administrative data. Two of them used solely routine healthcare data and the third one is running currently, based on a blend of routine data plus data captured within the trial.

Is the use of administrative data in trials becoming more common over time?

The use of administrative data in the trials setting is definitely becoming more common since clinical trials are known to be expensive and time-consuming. The use of administrative healthcare data is viewed as a more efficient means of understanding the health of the population using readily available data. However, there is a trade-off in terms of the quality of the data being captured.

What was the High-STEACS trial?

High-Sensitivity Troponin in the Evaluation of patients with suspected Acute Coronary Syndrome (High-STEACS) was a stepped wedge, cluster-randomised controlled trial. In plain English this means… It’s a relatively recent study design that’s increasingly being used to evaluate service-delivery-type interventions. The design involves crossover of clusters (usually hospitals or other healthcare settings) from control (standard care) to an alternative intervention until all the clusters are exposed to the intervention. This differs from traditional parallel studies, where only half of the clusters receive the intervention and the other half receive the control. The diagram below helps to demonstrate the difference in designs. The population of interest was patients presenting in hospital with heart attack symptoms. The trial sought to test a new high-sensitivity cardiac troponin assay against the standard-care contemporary assay; specifically, to test whether the new assay could detect heart attacks earlier and give a more accurate diagnosis.
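For readers who like to see the design laid out, the crossover pattern Catriona describes can be sketched as a simple rollout matrix (a hypothetical 4-cluster, 5-period example; High-STEACS itself had 10 hospital sites and its own schedule):

```python
def stepped_wedge_schedule(n_clusters, n_periods):
    """Build a stepped wedge rollout matrix.

    Assumes one cluster crosses from control (0) to intervention (1)
    at each step after a baseline period; real trials may switch
    several clusters at once.
    """
    return [[1 if period > cluster else 0 for period in range(n_periods)]
            for cluster in range(n_clusters)]

for cluster_row in stepped_wedge_schedule(4, 5):
    print(cluster_row)
```

Each row is a cluster and each column a time period: every cluster starts under standard care (0) and, one step at a time, crosses over to the intervention (1) until all are exposed. In a parallel design, by contrast, half of the rows would be all 0s and half all 1s.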

How were patients enrolled into the trial and how does this differ from a standard trial?

Step wedge trials usually randomise at a cluster (hospital) level, rather than randomising patients individually, so this was the main difference to a standard trial. So patients were enrolled rather than randomised into the trial. Standard trials require patient consent before randomisation, but in this context, individual patient consent was not needed due to the randomisation being performed at hospital level. Appropriate approvals for consent were sought through the hospitals. If patients presenting with heart attack symptoms at any of the hospitals were eligible for the trial (based on our pre-specified inclusion/exclusion criteria), then we had permission (at hospital level) to include them in the study and use their securely anonymised data.

How many patients were enrolled into the trial?

Approximately 48,000 patients were enrolled from 10 hospital sites in NHS Lothian (3 sites) and NHS Greater Glasgow and Clyde (7 sites), over a period of just under three years.

Which administrative data sets were used?

We used a total of 12 distinct data sources, a combination of general administrative datasets and datasets more specific to our area of research drawn from locally held electronic healthcare records. Prescribing data were obtained from the Prescribing Information System, alongside ECG data and general patient demographics. Trial-specific outcome data were obtained from the Scottish Morbidity Record (SMR01) and from the register of deaths (National Records of Scotland). All data were captured separately for each Health Board – there is currently no amalgamated data source which holds all the data, and Health Boards are the owners of their own data. The main linking mechanism for these 12 data sources was the patient CHI (Community Health Index) number. To ensure patient anonymity, CHI numbers were securely encrypted prior to use.
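The post doesn't say which encryption method was used, but a common approach to this kind of pseudonymisation is keyed hashing, so that the same CHI always maps to the same pseudonym across datasets without being recoverable by analysts. A minimal sketch, assuming an HMAC-based scheme and a hypothetical key:

```python
import hashlib
import hmac

# Sketch of keyed pseudonymisation of CHI numbers. The actual encryption
# method used in High-STEACS is not described in this post; HMAC-SHA-256 is
# shown purely to illustrate the general idea.
SECRET_KEY = b"held-by-the-linkage-agent-only"  # hypothetical key

def pseudonymise(chi: str) -> str:
    """Map a CHI number to a stable, non-reversible pseudonym."""
    return hmac.new(SECRET_KEY, chi.encode(), hashlib.sha256).hexdigest()

# The same CHI always yields the same pseudonym, so records from different
# datasets can still be linked on it without exposing the real identifier.
assert pseudonymise("0101011234") == pseudonymise("0101011234")
assert pseudonymise("0101011234") != pseudonymise("0202029876")
```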

How did you get approval for these data sets? How long did this approvals process take?

Approvals were required at a number of levels. We required ethics approval, approval to use patient data without consent, and Health and Social Care approval (through the Privacy Approvals Committee, predecessor to the Public Benefit and Privacy Panel). There were also health-board-specific approvals required for local data to be released, and we required data supplier approval. Finally, approval was needed for the data to be hosted on the Safe Haven platform. This process was long, and was ongoing throughout the duration of the trial. Although the data were being captured automatically via routine records, the final dataset wasn't confirmed until relatively late in the process, due to the complexities of mapping locally held healthcare records. One advantage of the national datasets is that they are the same across all health boards.

Where were the data sets stored?

Datasets from NHS Lothian and NHS GG&C were supplied separately from their respective Safe Havens. The combined dataset was hosted in the NHS Lothian Safe Haven space on the National Safe Haven analysis platform.

How did the linkage of the data sets happen?

The data sources from both health boards were combined and hosted on the National Safe Haven analysis platform. This wasn’t a straightforward process. Although we’d anticipated capturing exactly the same patient data across both health boards, the reality was quite different. Data were captured in different formats with different variable names and different definitions. So there was an unexpected element of data cleaning required before the data could effectively be merged into one large analysis dataset. The final linkage was done using the securely encrypted CHI number for each patient.
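The harmonisation step described above can be sketched as mapping each board's fields onto a common schema before merging. All field names and values below are invented for illustration; the real extracts and mappings were far more complex:

```python
from datetime import datetime

# Illustrative harmonisation: the two boards' extracts use different variable
# names, value codings and date formats, so each record is mapped onto a
# common schema before the final merge on the (pseudonymised) CHI.

lothian = [{"pat_id": "abc123", "sex": "F", "admission_dt": "2017-03-01"}]
ggc = [{"chi_hash": "def456", "gender": "Female", "adm_date": "01/03/2017"}]

def harmonise_lothian(rec):
    # Lothian fields already match the target coding; just rename them.
    return {"chi": rec["pat_id"], "sex": rec["sex"],
            "admitted": rec["admission_dt"]}

def harmonise_ggc(rec):
    # GG&C uses full words for sex and day-first dates; recode both.
    return {"chi": rec["chi_hash"],
            "sex": rec["gender"][0].upper(),          # "Female" -> "F"
            "admitted": datetime.strptime(rec["adm_date"], "%d/%m/%Y")
                                .strftime("%Y-%m-%d")}

combined = [harmonise_lothian(r) for r in lothian] + \
           [harmonise_ggc(r) for r in ggc]
```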

What do you see as the major benefits of using administrative data in this setting?

Use of administrative data in this context is a more efficient process – less resource is spent on the administrative aspects of trial enrolment, e.g. capturing demographic details such as age, sex, postcode or medical history. Using administrative data also gave us the opportunity to research a large, representative patient population, in comparison to the setting of an RCT, where a strict pre-specified population, not necessarily representative of the target population, is studied.

Overall, what were the major challenges of the study?

From the data side of things, ensuring the correct data were extracted was difficult. The diagram above is a very over-simplified view of what happened! The reality of picking up the required variables from two separate health boards, which capture data very differently, was difficult. Another challenging aspect was ensuring that a patient wasn't enrolled more than once in the study: patients can present at any hospital with heart attack symptoms more than once, so we needed to ensure they weren't re-included in the study each time they came to hospital. This required a de-duplication algorithm using encrypted and de-identified patient data. However, I think the biggest challenge was for those in the team tasked with obtaining the correct approvals; it was underestimated how complex this would be. While approval for the national datasets was straightforward and the eDRIS team were very helpful, processes for locally held data were not established at the time of trial set-up. Legislation around patient data confidentiality was continually changing, so we were faced with keeping abreast of new legislation as time progressed. The Safe Haven networks are now more established and, hopefully, the processes are more straightforward.
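The de-duplication requirement can be sketched with one simple rule, keeping only each patient's first qualifying presentation. The trial's actual algorithm is not described here, so this is an illustration of the idea only, with invented records:

```python
# Sketch of a simple de-duplication rule: keep only each patient's first
# qualifying presentation, identified by their pseudonymised CHI. The real
# High-STEACS algorithm is not described in the post.

presentations = [
    {"chi": "abc", "date": "2016-01-10"},
    {"chi": "def", "date": "2016-02-01"},
    {"chi": "abc", "date": "2016-05-03"},  # re-attendance: should be dropped
]

def first_presentations(records):
    """Return one record per patient: the earliest presentation."""
    seen = set()
    kept = []
    # ISO dates sort correctly as strings, so a plain sort gives time order.
    for rec in sorted(records, key=lambda r: r["date"]):
        if rec["chi"] not in seen:
            seen.add(rec["chi"])
            kept.append(rec)
    return kept

enrolled = first_presentations(presentations)
```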

Is there anything you would do differently next time?

I think the data validation aspect of the trial is crucial. Ideally, we would have spent more time on this to ensure the data were as correct as possible. Involving the clinical team much sooner in this process would also have helped: they have a really important role to play in ensuring the data picked up make sense from a clinical perspective. For High-STEACS, access to the data was highly restricted and did not include the clinical team, and many of the data discrepancies were only picked up at the final review stage, once data and results had been released out of the Safe Haven area. Working within the Safe Haven environment creates time lags on both sides of the process – importing data into the Safe Haven and exporting results out at the end both take time. We hadn't considered this time lag when working to tight timelines.

Do you know if anyone is using the learning from this trial for future trials of this kind?

The High-STEACS trial was directly followed by the HiSTORIC trial, addressing similar research questions and using many of the same data sources, so we have been through the loop again, which has made for a more streamlined process. Other trials within ECTU are also making use of the learning from High-STEACS, particularly on the governance and approvals side of things.

eCRUSADers Summary


Thanks for sharing this with us Catriona! It is great to see that administrative data are being utilised alongside clinical trials in Scotland. It is also interesting to hear that, despite being part of a trials unit like ECTU, the High-STEACS team still faced many of the same challenges that we eCRUSADers have experienced when using administrative data for research. In particular, we can relate to the issues of permissions, timing and working within the Safe Haven environment. Overall, it seems that the timing issues were due to the use of locally held data rather than the national data.

Researcher Experience: Dr David Henderson

It's a new year and this week we hear from a new researcher, namely, Dr David Henderson. David is a Research Fellow at Edinburgh Napier University and the Scottish Centre for Administrative Data Research (SCADR). He is no new face to the eCRUSADers scene and has built up a wealth of knowledge and expertise in the administrative data sets he has worked with over the last four years. In particular, David has worked closely with the Scottish Social Care Survey (SCS), at both local (Renfrewshire Council) and national level. His PhD work utilised the national SCS linked to Prescribing Information System data, the Unscheduled Care Data Mart and the NHS Central Register. Additionally, David has worked with the Scottish Programme for Improving Clinical Effectiveness in Primary Care (SPICE-PC) data. In this post, David describes his PhD work and provides an outstanding demonstration of the wealth of knowledge that research using administrative data can offer. He also gives us an insight into some of the unexpected externalities that can significantly impact project timescales but are hard to plan for. As in our previous Researcher Experience posts from Dr Catherine Hanna and Matthew Iveson, David highlights timing as one of the major difficulties he has experienced throughout his research career using administrative data. David's positivity emanates throughout this blog post and he does an excellent job of echoing the feelings I hear time and time again from researchers in this area: a genuine understanding of the need for the legal processes in place to protect patient data, coupled with frustration at the parts of those processes which inhibit researchers' ability to use the data to its full potential, all together with a positive attitude that things are slowly but surely improving.
As David points out, things are changing in Scotland and we look forward to hearing very soon from the Chief Statistician, Roger Halliday, on the Scottish Government's plans for the new Research Data Scotland.

Brief overview of David's research

Using the linked data set described above, the focus of my research has been investigating the association between multimorbidity (more than one long-term condition) and social care receipt. I am also analysing interactions between health and social care services, with a particular interest in unscheduled care. Good social care data has been difficult to come by in the past - not just in Scotland, but internationally. I have been lucky to be one of the first group of researchers to get access to the Social Care Survey collected by the Scottish Government in a format that can be linked to health-based data sources. So far, provisional results show us that increasing age and severity of multimorbidity are associated with higher social care receipt. This was anticipated, but we have never been able to show it empirically before the cross-sectoral linkage. We have also been able to describe the receipt of social care by socioeconomic position (SEP) using the Scottish Index of Multiple Deprivation (SIMD). This is new and, to my knowledge, hasn’t been described elsewhere on such a large scale. Here we find that those with lower SEP are more likely to receive social care. (All these patterns are shown in the figure below). However, due to a lack of good measures, we can’t tell if the provision of care matches need for care. My latest piece of work has been looking at whether receipt of social care influences unplanned admission to hospital. Using time-to-event (survival) analysis we can see that, for those over 65, people who receive social care are twice as likely to have an unplanned admission (again these results are provisional at the moment).
© David Henderson
© David Henderson
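The time-to-event analysis mentioned above can be illustrated with a minimal Kaplan-Meier estimator. The follow-up times and event indicators below are toy values, not results from the linked data:

```python
# Minimal Kaplan-Meier estimator, as a sketch of the kind of time-to-event
# (survival) analysis described above. All numbers are invented; David's
# actual analysis used the linked social care and admissions data.

def kaplan_meier(times, events):
    """times: follow-up times; events: 1 = unplanned admission, 0 = censored.
    Returns [(time, survival probability)] at each event time."""
    at_risk = len(times)
    surv = 1.0
    curve = []
    for t in sorted(set(times)):
        # Number of events (admissions) at this time point.
        d = sum(1 for ti, e in zip(times, events) if ti == t and e == 1)
        if d:
            surv *= 1 - d / at_risk
            curve.append((t, surv))
        # Everyone observed at time t (event or censored) leaves the risk set.
        at_risk -= sum(1 for ti in times if ti == t)
    return curve

curve = kaplan_meier(times=[2, 3, 3, 5, 8], events=[1, 1, 0, 1, 0])
```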

Summary challenges faced 

The barriers I have faced are, no doubt, similar to others using linked data – the main one being time. Approvals, extraction, linkage etc. all take considerable time, and as a researcher you are not in control of these timescales. A good example is a sub-project for my PhD which was to use social care data from one local authority area only. The council in question were exceptionally helpful and keen to share data. They were very patient whilst I organised ethics and approvals on the academic side. However, by the time I was ready to talk data sharing agreements they had operational pressures (specifically the 2017 local elections) which tied up their legal team. After this we were all hopeful about making progress, but a certain Prime Minister went for a walk in the woods at Easter and decided to call a general election! Cue another 6-week delay until the legal team could start negotiating an agreement. We eventually got there, but this illustrates that data controllers are at the mercy of higher forces as well, and it is impossible to set meaningful deadlines. I am very fortunate to be in a position to keep working with my PhD data in my current role and to keep asking questions of the large amount of data we have. However, I have moved university in order to do this. This means I now have to repeat the process of ethics, data sharing agreements, privacy impact assessments etc. This is absolutely necessary, as my current employers need to make sure that all legal aspects are covered, but there is nothing more soul-destroying than recreating the (significant) amount of work that goes into the required forms (initially completed two years previously). Fortunately, work is afoot at the Scottish Government to make this process obsolete and centralise access to research data sets – however, this is still at an early stage and we are currently unsure as to when it will be operational or what exactly will be available. For now, the pain endures!

My reflections

Although there are difficulties in using administrative data for research purposes and delays can be frustrating at times, it is still (incredibly) a really rewarding process. The ability to gain new insights from previously unseen data is something that should excite any researcher. More importantly, data linkage offers the potential to improve society by answering questions that can’t be asked with traditional methods. Well worth an extra ethics form (even if I grumble about it!).

Round Up of the 2019 Administrative Data Research Conference

Author: Elizabeth Lemmon

In this post, I offer my thoughts, as an eCRUSADer, on the Administrative Data Research (ADR) Conference held in Cardiff between the 9th and 11th of December. Given that these were three days of excellent talks and discussions, summarising them has been no mean feat!


Overall Summary


The conference, organised by Administrative Data Research Wales (ADRW) at the University of Swansea and sponsored by Administrative Data Research UK (ADRUK), the Economic and Social Research Council (ESRC), and the Welsh Government, had the central theme of 'Public data, for Public Good' – a theme which, most appropriately, reminds us that the data we are using belongs to the public, and that we must hold this at the forefront of our minds as we use it to generate better outcomes for them.

© ADR

The three days were jam packed with plenary keynotes, parallel sessions, rapid fire sessions, a visit to Cardiff Castle and, for the super geeks, a whole lot of Rubik's Cubing 🤓

Unlike some other international conferences I have been to, where it can be difficult to see the relevance of research from one country translate over to your own, the 2019 ADR was all highly relevant. In fact, a clear takeaway from the conference was the message that there is a huge amount to be learned from how things are done elsewhere.

There were so many talks I wanted to go to and as always is the way with parallel sessions, it simply wasn’t possible to get to them all. I decided to try and attend as many as possible which focussed on administrative data infrastructure and ethics in using public data. 

In this post, I offer a summary of take home messages and expand on a couple of the keynote talks and parallel session talks of interest in this area. I could have written a whole post on the excellent work presented from researchers in Scotland, including some from fellow eCRUSADers, but alas this will have to wait for another time!


ADR 2019 Take Home Messages


  • The potential of administrative data for research is huge. Especially in Scotland where linkage across several research domains is possible.
  • There is a general movement towards the use of large data repositories (or data lakes/data lochs/integrated data systems – perhaps we need to agree on one term?) of ready-linked data, which will speed up access for researchers whilst maintaining public privacy, and ultimately make better use of public data for public good.
  • Issues with data access, particularly concerning timing, are not unique to Scotland.
  • Some countries seem to be further forward than Scotland and the rest of the UK in this regard (notably Australia and Canada) and there is much to be learned from work going on around the world.
  • Whilst the message of public trust and transparency was front and centre throughout the conference, I felt there was little demonstration of how this is being done in practice and how, beyond using public data safely, researchers can contribute directly to building that trust.
  • Don’t be fooled by the ADR Rubik’s cube – it is a lot harder than it initially looks!

At the end of the three days, it was great to see Michael Fleming, researcher from the University of Glasgow, receiving the Best Paper Award for Evidence to Support Policy Making on his work using linked education and health data to explore outcomes for children treated for chronic conditions. Before presenting Michael with his award, Emma Gordon, Director of ADRUK, acknowledged the long wait (from memory just under 2 years) before Michael got access to the data for his research and asked the audience:

“Can you imagine having to wait so long for data?”

Sadly, we certainly can. In fact, there are a considerable number of us here in Scotland (and almost certainly elsewhere) who have.

It was great to hear Emma highlight this and I do think that the conference really sent a message of hope to eCRUSADers and researchers more generally, that things are improving in Scotland. It was certainly motivating to see the future of administrative data research already being put into practice in many countries around the world. But, there is still a long way to go.

In the meantime, you should definitely join eCRUSADers to hear the latest on the administrative data front and get in touch to share your experiences so that we can all learn from them.

Hit subscribe at the top of this page!


Keynote talks to highlight 


Garry Coleman, Associate Director of Data Access at NHS Digital

The first keynote presentation on day one was given by NHS Digital's Garry Coleman who, despite being likened to Scrooge and Smaug the dragon fiercely guarding NHS administrative data, outlined the suite of changes that NHS Digital have made over the last year to improve access to NHS data for researchers. These included the introduction of a fast-stream service for repeated applications and those with precedent, published 'standards' to help researchers know what is expected from their application, and the establishment of the Data Access Environment (DAE). DAE is the new cloud technology in England whereby researchers can access patient data for research without the data needing to leave NHS Digital. The platform went live in May 2019 and aims to provide researchers with faster access to ready-linked data sets, with built-in tools for more powerful data visualisation and analysis. There's a YouTube video on it here. It all sounds very good, as well as very familiar. I wonder how this platform will compare to Research Data Scotland?

Whilst ensuring public trust in all that we do with public data was at the heart of Garry's talk, it was not clear how much, if any, public engagement NHS Digital has done around the use of the new DAE system. I've had a quick peruse of the NHS Digital website and can't see any evidence of it there either. Perhaps it is there somewhere and I am missing it? In any case, given the need to be transparent and to ensure that public trust is at the heart of using administrative data for research, we perhaps need more than the hope that the public are aware and happy for this to be going ahead. What was clear from Garry's talk was that he is actively seeking feedback from the research community on how they have found NHS Digital's data access processes, and he expressed a genuine interest in making things easier for researchers.

John Pullinger, Former Head of the Government Statistical Service and Chief Executive of the UK Statistics Authority. 

On day two, the first plenary keynote was from John Pullinger, who offered his thoughts on "Lots of lovely numbers but why does everyone make it so difficult?" Clearly, John has an immense amount of experience in this field, and he did an excellent job of taking us on a journey with him from the 70s, when he first began working with the limited administrative data then available, to the present day, where administrative data are all around us. John's message was clear: for us to have a social licence to operate with the public's data, it is incumbent on us to earn their trust. This is, in fact, just as important as the research itself. He highlighted the importance of seeing legislation around the use of public data, like GDPR, as an enabler of research rather than an impediment. Finally, John pointed out the need to be realistic about what the data can tell us, and not to claim more than the evidence supports. Once again, this comes back to the need to earn the public's trust and not to do anything that might undermine it.

For me, John really instilled in my mind the fundamental need to remember whose data we are using and that we are very much still on the journey to earning their trust.


Parallel sessions to highlight


In this talk, Robert McMillan talked about the Georgia Policy Labs, a 'data lake' which hosts many ready-linkable administrative data sets for policy makers and researchers to access and analyse across a number of key policy areas. Robert highlighted the secure cloud infrastructure, separation of duties and secure data rooms which ensure that data are stored and used in a safe way. He also mentioned the 'master data sharing agreement' which they use to allow access to this data lake. Time was tight, so there wasn't really time to go into detail on this, though I am sure the Scottish Government would be interested to know more as they work towards implementing Research Data Scotland.

In her talk, Anna Ferrante discussed her work in merging Data Linkage Western Australia and the Centre for Data Linkage to form the Population Health Research Network (PHRN). The PHRN is a national network of data centres which links data collected across Australia on the entire population. Its infrastructure allows for the safe and secure linkage of data collections from a wide range of sources. Like some of the other talks throughout the conference, Anna talked about the Bloom filter structure the PHRN uses to probabilistically and anonymously link between administrative datasets. Not surprisingly, given the amount of research that comes out of Australia using linked administrative data, Anna's was one of many Australian talks which highlighted the maturity of Australia's administrative data infrastructure compared to Scotland's.

Michael Schull talked about a project he is involved in that is building a partnership between health service researchers and computer scientists to develop a high-performance computing platform for the analysis of large linked administrative datasets. The goal of the partnership is to use artificial intelligence and machine learning to improve health and health care, which of course requires an infrastructure with the power to store and manage large quantities of data. Michael talked about some of the things they have learned from working with computer scientists, noting that the hardware for this infrastructure was in fact the easy part....

Della Jenkins presented on the work being carried out at Actionable Intelligence for Social Policy (AIS), an organisation which works with state and local governments to implement Integrated Data Systems that link administrative data across government agencies. Della's talk reiterated many of the messages that John Pullinger had highlighted in his keynote speech earlier in the day: namely, as initiatives for integrated (i.e. linked) administrative data in research continue to grow, it is vital that we build awareness and infrastructure with public involvement every step of the way. The group have a very useful report and toolkit on their website: Tools for Talking (and Listening) About Data Privacy for Integrated Data Systems. Although the report is aimed at government agencies and their partners who are using linked administrative data, the content is helpful more generally in terms of steps to develop a social licence for using linked public data. It's well worth a look.

Andy Boyd gave a great talk on work carried out by Closer and NHS Digital looking at possible infrastructures for the onward sharing of longitudinal study data linked to administrative records, which currently cannot be released outside of the cohort study institution. The work identified five onward data-sharing models and concluded that, although greater clarity is needed to share anonymised data effectively (and internationally), there are opportunities for development, and the UK's large community of longitudinal cohort studies might be able to facilitate part of those processes. Full report available here.

One of the final talks I went to was from Mike Robling at the University of Cardiff. I'd been looking forward to this talk because I had already heard of the CENTRIC study, which hopes to develop "training for UK researchers that enhances their understanding of public perspectives and governance requirements and improves their practice when working with routine data". In his talk, Mike outlined the results from focus groups with stakeholders, workshops with members of the public, and an online survey filled in by researchers. In summary, the study found that there is both a need and an appetite for training researchers in public engagement and in the complex regulations and requirements around using routine data for research. I very much look forward to seeing the training resources that CENTRIC produce and think they will help to fill the existing gap when it comes to researchers improving public engagement and public trust.
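One technique mentioned above, Bloom-filter-based privacy-preserving linkage, can be sketched briefly: each name is broken into bigrams, each bigram sets a few positions in a fixed-size filter, and two encodings are compared with the Dice coefficient, so similar names score highly without either party revealing the underlying name. The filter size and hash count below are arbitrary illustrative choices, not the PHRN's actual parameters:

```python
import hashlib

# Sketch of Bloom-filter record linkage: a name is split into bigrams, each
# bigram sets a few bit positions in a fixed-size filter, and two encodings
# are compared with the Dice coefficient. SIZE and N_HASHES are illustrative.
SIZE, N_HASHES = 256, 2

def bloom_encode(name):
    """Return the set of bit positions set by the name's bigrams."""
    bits = set()
    padded = f"_{name.lower()}_"  # pad so first/last letters form bigrams too
    for bigram in (padded[i:i + 2] for i in range(len(padded) - 1)):
        for k in range(N_HASHES):
            h = hashlib.sha256(f"{k}:{bigram}".encode()).digest()
            bits.add(int.from_bytes(h[:4], "big") % SIZE)
    return bits

def dice(a, b):
    """Dice similarity between two sets of bit positions."""
    return 2 * len(a & b) / (len(a) + len(b))

# Similar names share most bigrams, so their filters overlap heavily;
# dissimilar names share (almost) none.
score_close = dice(bloom_encode("smith"), bloom_encode("smyth"))
score_far = dice(bloom_encode("smith"), bloom_encode("jones"))
```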

One final reflection on privacy 


This post may have been more aesthetically exciting if I had been able to fill it with photographs of the speakers I went to see present. Sadly, it isn't, because none of the speakers said one way or the other whether they were happy to be photographed. Given the nature of the conference, I decided to err on the side of caution and assume that this meant the speaker had not given consent to having their photograph taken and plastered on social media. Of course, not everyone shared this view, which made me wonder whether I had been silly not to take any photographs in the first place, or whether I should simply have asked the speakers beforehand if they would mind. And so I wonder, in the name of transparency, might it be possible for speakers to quickly mention at the beginning of their talks if they are happy to be photographed? Just an idea. Here's me giving my talk 🙂

The full 2019 ADR conference proceedings are available here where you can access abstracts of all talks from the three days. Thanks to all of the organisers for putting on such a great event and to the speakers for sharing their exciting work. 

As always, I would love to hear your thoughts on this so please comment/share/email me!

Researcher Experience: Dr Catherine Hanna

This week we hear from Dr Catherine Hanna, Research Fellow and PhD student (Cancer Research UK Clinical Trials Fellowship) at the Institute of Cancer Sciences, University of Glasgow. For about one year, Catherine has been working with Greater Glasgow and Clyde (GG&C) Chemocare data linked to both Scottish Cancer Registry (SMR06) and Cancer Quality Performance Indicators (QPI) data. She also has approval from the Public Benefit and Privacy Panel (PBPP), granted in June 2018, to obtain a national linked cancer data set for her project, and is currently awaiting access to this data. In this post, Catherine tells us a bit about her research and what she has done with the GG&C data, as well as the challenges she has faced in applying for and getting access to the national data.

Brief overview of Catherine's research

My research investigates how we can assess the impact of oncology clinical trials. It is important to be able to demonstrate that trials testing new oncology treatments are having real-life impacts such as changing practice, changing health and saving money. Analysing this impact helps us to identify which trials are making real-world differences and, subsequently, to design more impactful trials in the future. I am conducting a case study to assess the impact of the Short Course Oncology Treatment (SCOT) trial (1). This study investigated whether treating patients with a diagnosis of colorectal cancer with 3 months of chemotherapy following surgery was non-inferior to treating with 6 months of chemotherapy. The trial results have shown that giving a shorter duration of treatment does not make a significant difference to the percentage of patients who are disease free at 3 years. Patients in the 3-month arm of the trial also had significantly fewer side effects from the treatment, especially with regards to peripheral nerve damage.

Gaining access to GG&C Chemocare data, linked to QPI and SMR06 data sets, has enabled me to assess the impact of the SCOT trial on changing clinical practice. There was a significant change in prescribing practices for patients with colorectal cancer after the results of the SCOT trial were publicised. This will translate to a cost saving for the GG&C health board and will result in fewer patients in GG&C experiencing debilitating peripheral nerve damage as a result of their adjuvant chemotherapy treatment. A poster with the preliminary results of this analysis was presented at the National Cancer Research Institute 2018 conference. In the next stages of my project, I plan to investigate the impact of the SCOT trial on prescribing on a national scale by using routinely collected chemotherapy data from the three cancer networks in Scotland (South East Scotland (SCAN), West of Scotland (WOSCAN) and North of Scotland (NOSCAN)).
My project is running alongside, and will be using a subset of, the COloRECTal Repository (CORECT-R) data at the University of Edinburgh (part of an even wider project at the University of Leeds). PBPP approval for my project was granted in June 2018; however, I do not yet have access to this data. Below, I outline some of the lessons I have learned while applying for access to this national data.

Summary challenges faced 

  1. When data are held on databases outwith Information Services Division (ISD), often at a local or regional level, the process of data linkage becomes more challenging and costly. Often there is not the expertise at a local level to extract and transfer data, and working relationships between local analysts and those coordinating data linkage centrally do not exist. Specifically, there are few examples of previous national-scale linkage of (locally held) chemotherapy prescribing data.
  2. Data linkage requires a pre-specified list of the data variables from each data set. Often these lists are not publicly available, or even defined, and it can be time-consuming and difficult to generate the variable lists required for the data linkage process.
  3. Evidence of funding to perform data linkage and to make use of national linkage services is often required for PBPP approval. However, depending on the time between submission and the data linkage occurring, it can be several years before the funds are used.
  4. If a researcher is funded for a specified period, the time taken for PBPP approval and data acquisition means that they may not have an opportunity to analyse the data. There is also a risk that the research question will be less relevant than at the time of submission.

My reflections

There is huge potential to use routine data to improve the way we do clinical trials and ultimately to improve outcomes for patients. The potential to pioneer the use of routine data for research purposes in Scotland is obvious; however, the practicalities of currently accessing and using this data are not straightforward. My advice for anyone planning to work with national Scottish data, based on my experience:
  • Apply for access to data early and be aware that data acquisition may take longer than expected depending on your project.
  • Think about the costs of data linkage, especially if you want to link data sets that are not currently stored in ISD. The size and subsequent cost of a data linkage project is often based on the number of databases used (especially those outside ISD), rather than on the size of the finalised database.
  • Define which variables from the data set you will require early and be clear why you require each variable for your analysis.

Researcher Experience: Matthew Iveson

Our first Researcher Experience post is from Matthew Iveson, Senior Data Scientist at the University of Edinburgh. Matthew has been working with Scottish administrative records for about four years. Data sets he has worked with include the Scottish Morbidity Records, Scottish Census, Prescribing Information System, NHS Central Register, NRS Births, Deaths and Marriages, and the Scottish Stroke Care Audit. He has also worked with the Scottish Longitudinal Study, a set of pre-linked administrative data sets. We asked Matthew to tell us a bit about his research and the routine data he has worked with, what he saw as some of the key challenges in accessing and using administrative records, and to offer his thoughts to early career researchers hoping to work with this kind of data.

Brief overview of Matthew's research

My work has mainly focused on using data linkage to reconstruct the life-courses of individuals who took part in the Scottish Mental Survey 1947, a nation-wide survey of age-11 thinking skills conducted in Scottish schools in 1947. These individuals, now aged over 80, have experienced a lifetime of changes in health and socioeconomic circumstances, and represent an extremely valuable opportunity for examining how early-life circumstances can have a lasting impact on health and wellbeing across the life course. So far, I have used linked data to show that individuals with higher childhood cognitive ability, better socioeconomic circumstances and more education are less likely to die, less likely to report a long-term function-limiting illness in older age, more likely to be economically active in later life, more likely to retire later and of their own volition, and so on. I've also tried to establish the mechanisms by which childhood advantage affects health and wellbeing. I am currently waiting for data to examine whether factors from across the life course can be used to predict whether someone will require care in later life (including the type of care required), how well individuals can recover from a stroke, and whether someone will respond to a given antidepressant medication.

Summary of challenges faced

One of the biggest issues I faced was timing. In some instances I have been waiting over 3 years for data. There have been several delays along the way, due to changes to the data access process (both over time and between organisations), queues for submitting forms to data controllers, changes to the legal landscape for data sharing (such as GDPR) and loss of submitted paperwork. The problem is that these delays are relatively common, and they result in a timescale that is not achievable under normal funding conditions. Since most early-career researchers find themselves on short-term contracts, they risk not getting data before their contracts expire, and since they are judged more than most on their productivity, these delays can seriously hamper a researcher's career trajectory.

The delays also highlight the fragility of the data access process. Getting to know key people in each organisation is one of the best ways to get through the process smoothly, but if these people leave, their expertise often goes with them. For example, during my project the lawyer in charge of reviewing requests for census data left. Their replacement was understandably less confident about data sharing, and decided to re-review the laws surrounding the use of census data for research. Data controllers and other involved organisations need to ensure that knowledge and expertise are distributed across their teams, and need to invest in the infrastructure and staff that can ensure a robust system for the future.

Thoughts for early-career researchers 

While organisations need to make things easier, researchers themselves need to manage their own expectations – gaining access to routinely-collected data, especially linked data, takes a very significant amount of time and effort. It’s worth planning well in advance and making sure that you can stay busy and productive while you wait for data to arrive. It’s also worth thinking about pre-linked datasets such as the Scottish Longitudinal Study if you’re short on time. Regardless of how you engage with routinely collected data and how long it takes, bear in mind that you’re learning an incredibly rare and valuable set of skills. Things are slowly getting better, faster and easier, but organisations are still fine-tuning their processes and a lot of the data is still new to the research scene. If you do have the time – and the perseverance – then administrative data is an extremely powerful tool that will help you to answer the largest and most difficult questions faced by society.   

Welcome!

Welcome, fellow Early Career Researchers Using Scottish Administrative Data – now known as eCRUSADers! For this first post, I thought I would briefly introduce myself before telling you more about what this blog is all about. My name is Elizabeth Lemmon, and I am a Research Fellow working at the University of Edinburgh. I've set up the eCRUSADers blog on the back of numerous conversations I have had over the years with fellow researchers and colleagues, all of which have pointed to the need to share information and discuss experiences of working with Scottish administrative data. I manage the eCRUSADers blog alongside Matthew Iveson, Senior Data Scientist at the Centre for Cognitive Ageing and Cognitive Epidemiology, also within the University of Edinburgh.

Is this blog going to be of interest to me? 

Answer the following questions: 
  • Do you work with or want to work with administrative data (Scottish or otherwise)?
  • Do you want to hear about interesting research that is going on (in Scotland and further afield) which uses administrative data?
  • Are you interested in possible training opportunities for working with sensitive and complicated administrative data sets?
If you've said yes to any of those, keep on reading!

Why is there a need for the eCRUSADers blog?

Scotland is currently in a unique position to produce population level research due to the way it routinely collects information about Scots across a number of key domains – health, education, social care and so on. Additionally, these data sets can be linked together, creating an invaluable source of information for social research, which could ultimately have a positive impact on the lives of people living in Scotland and further afield. However, navigating the administrative data landscape is complex, working with administrative data is tricky, and the resources with which to carry out these tasks are scarce. These issues are particularly challenging for Early Career Researchers (ECRs), who have limited time and, often, limited knowledge of how to traverse this landscape. The eCRUSADers blog will provide somewhere for them to start.

What is the purpose of the eCRUSADers blog?

The purpose of the eCRUSADers blog is three-fold:
  • To provide a platform for the sharing of information and experiences
  • To enhance our understanding of what is working and where there is room for improvement
  • To encourage discussion around what can be done to keep Scotland on the trajectory of becoming a world leader in research using administrative data
Blog posts will consist of researcher experience posts; discussions of academic articles of interest (from Scotland and beyond); discussions of relevant training and resources; round ups; contributions from non-ECRs working with, or with an interest in, Scotland's administrative records; and anything else of eCRUSADers interest that crops up.

Overall, the blog will provide a place for ECRs to go if they are thinking about working with Scottish administrative records and want to learn from the experiences of others. At the same time, you don't even have to be an ECR! The eCRUSADers content is relevant to anyone applying for access to, or working with, administrative data in Scotland. What is more, the lessons learned in Scotland should also translate to other jurisdictions, meaning that even if you aren't working specifically with Scottish data, you can most likely still benefit from the eCRUSADers content! As with any newly established blog, we plan to let it grow organically, depending on changes occurring on the administrative data front and on the type of content ECRs provide and want to see.

Who can I contact to find out more? 

If you want to find out more or contribute to the blog we would be very happy to hear from you. Please get in touch by sending an email to ecrusad@ed.ac.uk or get in touch with Elizabeth at elizabeth.lemmon@ed.ac.uk. You can also sign up to receive notifications of new eCRUSADers content by hitting subscribe at the bottom of this page!