The CMMI Institute's Data Management Maturity (DMM) model, V1.0, is a comprehensive framework of data management practices in six key categories that helps organizations benchmark their capabilities, identify strengths and gaps, and leverage their data assets to improve business performance.
Delivering effective institutional support for research data management (RDM) is a challenge for any higher education (HE) institution, regardless of size or research intensity. Support typically includes both technical and human infrastructure, with ownership of individual elements distributed across the institution. Ensuring that RDM service development takes as comprehensive a view as possible and engages effectively with relevant stakeholders is key to successful research data support.
RISE was created primarily for higher education institutions, to help them take stock of their current RDM support provision and identify areas of focus for future development. The process is typically administered by someone from within the institution with significant experience of the local research support infrastructure and a good understanding of the wider issues associated with supporting data management. Regardless of who manages the service review, input is likely to come from representatives of the Library, Research Office and IT, and may include other areas. One of the advantages of using RISE is that it provides a means of engaging these stakeholders in productive discussion about service development and allows them to reach a shared vision of where the RDM service aims to be.
Probably the most widely established model for service development is the Capability Maturity Model (CMM) developed at the Software Engineering Institute (SEI) at Carnegie Mellon University (Paulk et al., 1993)[ii]. In the CMM approach, the scale represents maturity, i.e. the level of organizational capability to reliably perform the process. This maturation reflects the extent to which each process is institutionalized and managed, ideally with quantified measures enabling continuous process improvement.
The model reflects the high level of diversity among UK research institutions, which range from the highly research-intensive to those that conduct little research or specialise in certain disciplines. This is not unique to the UK: institutions worldwide are addressing funder and community expectations of broader research data sharing, but the appropriate level of response is largely defined by the institutional context. It would be unrealistic to expect every institution to provide the same level of service capability across every element of RDM support. RISE aims to help institutions identify which capabilities are appropriate for them, and therefore which areas to prioritise in their service improvement planning.
The level of service capability that is feasible or desirable to deliver will depend on the institutional context. Where it might be considered essential for a large research-intensive institution to provide an in-house data publishing platform, an institution with modest research capacity may be better served by outsourcing or sharing aspects of the service, including the repository platform itself, with other institutions or an external provider. The RISE capability model recognises this contextual difference by providing three possible levels of service capability, using compliance with research funders' main policy expectations and with legal requirements as a starting point. It should be noted that while the levels offer a progression in terms of service capability, RISE does not assume that more is better. The level of capability offered should be proportionate, with costs justified by local research strategy, available resources, and likely demand for the relevant services.
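The tiered-capability idea described above can be pictured as a simple gap analysis: record the current and target capability level for each service area, then flag areas where the current level falls short. The sketch below is purely illustrative; the service areas, level definitions, and targets shown are hypothetical examples, not RISE's actual categories.

```python
# Illustrative sketch only: the service areas and levels below are
# hypothetical, not taken from the RISE framework itself.
# Levels here: 1 = meets core funder/legal expectations,
# 2 = enhanced institutional support, 3 = comprehensive provision.

current = {
    "data_management_planning": 2,
    "active_data_storage": 1,
    "data_repository": 1,
    "training_and_guidance": 2,
}

# Targets are context-dependent -- RISE does not assume more is better,
# so targets should be proportionate to local strategy and demand.
target = {
    "data_management_planning": 2,
    "active_data_storage": 2,
    "data_repository": 3,
    "training_and_guidance": 2,
}

def gap_report(current, target):
    """Return service areas where current capability falls short of target."""
    return {area: target[area] - level
            for area, level in current.items()
            if level < target[area]}

print(gap_report(current, target))
# -> {'active_data_storage': 1, 'data_repository': 2}
```

A report like this gives stakeholders a concrete starting point for prioritisation discussions, without implying that every area must reach the top level.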
Aiming to ensure that institutions could use RISE in a variety of contexts, the DCC engaged with 16 UK HE institutions to test its relevance and utility. Applications ranged from using the tool as a framework for a semi-structured interview with RDM service managers and selected central support staff, to using it in a group workshop session to discuss data publication needs. The RISE outputs from the workshop informed a more detailed assessment of shortlisted platforms, based on capabilities set out in ReCap, a sister DCC model for evaluating data repositories.[vii]
Producing a formal report as part of the RISE process is optional; some users have simply found the tool useful for initiating conversations between RDM stakeholders and reaching a consensus about the service. For others, RISE has helped identify gaps in support provision and aided prioritisation decisions, contributing to the development of roadmap documents. Working through the RISE framework uncovers useful information about the case for service development that can be incorporated into business plans. Using RISE alongside its sister model ReCap can also help scope high-level requirements for data repository platforms, enabling progress to more detailed discussions around platform selection.
In the European Open Science Cloud (EOSC) pilot project, the DCC is working with partners to develop an integrated competence and capability framework, informed by shared experience in supporting service development and validating EOSC services. The framework aims to help organisations plan for the effective deployment of the services that European Research Infrastructures offer researchers to better enable data science. Institutional research data services will also need to be aware of EOSC services and help researchers use them. By joining up competence frameworks in data science and data management,[xii],[xiii] the EOSC framework will help organisations ensure that the right training is included in service development roadmaps and in the career development plans of relevant staff.
A maturity model describes a desired or anticipated evolution from an ad hoc approach to a managed process. It is usually defined in discrete stages for evaluating the maturity of organizations or processes (Becker, Knackstedt & Pöppelbuß 2009). A maturity model can also be developed to evaluate practices applied to individual data products (e.g., Bates and Privette 2012; Peng et al. 2015). A number of maturity models have been developed and utilized to quantifiably evaluate both stewardship processes and practices.
This article provides an overview of the current state of assessing the maturity of stewardship of digital scientific data. A list of existing or developing maturity models from various perspectives of scientific data stewardship is provided in Table 1, with a high-level description of each model and its application(s) in Section 3. This allows stewardship practitioners to further evaluate the utility of these models for their own stewardship maturity verification and improvement needs.
Figures 1 and 2 display different perspectives of maturity within the context of managing scientific data stewardship activities. They highlight the interconnectivity and interdependency of different levels of stewardship activities within individual organizations and different types of maturity for scientific data products through the entire data product lifecycle.
Figure 1: Category of tiered maturity assessment within the context of scientific data stewardship, with examples of existing maturity assessment models. The arrows indicate that the maturity at the initiation point can impact that at the ending point. See Section 3 for a high-level description of each maturity assessment model listed in the diagram.
Figure 2: Category of data product lifecycle-stage-based maturity type, with examples of existing assessment models, in the form of a matrix. See Section 3 for a high-level description of each maturity assessment model listed in the diagram.
Table 1 provides a list of existing maturity assessment models, including those highlighted in Figures 1 and 2. Brief descriptions of these models and, where available, their applications are provided in the next section.
McSweeney (2013) reviewed four leading business data management maturity assessment models and concluded that there is a lack of consensus about what comprises information management maturity, and a lack of rigor and detailed validation to justify organizational process structures. He called for a consistent approach linked to an information lifecycle (McSweeney 2013).
The Enterprise Data Management Council (EDMC) Data Management Capability Assessment Model (DCAM) was released in July 2015 (EDMC 2015). DCAM defines a standard set of evaluation criteria for measuring data management capability and is designed to guide organizations in establishing and maintaining a mature data management program (EDMC 2015; Gorball 2016). A detailed description and comparison of the CMMI DMM and EDMC DCAM can be found in Gorball (2016).
The trustworthiness of individual repositories has been a topic of study for the data management and preservation community for many years. Based on the Open Archival Information System (OAIS) reference model, ISO 16363 (2012) establishes comprehensive audit metrics for what a repository must do to be certified as a trustworthy digital repository (see also CCSDS 2012a). Three important qualities of trustworthiness are integrity, sustainability, and support; the audit metrics cover the entire range of digital repositories in three different aspects: organizational infrastructure, digital object management, and infrastructure and security risk management (ISO 16363 2012; CCSDS 2012b; Witt et al. 2012). A detailed justification for transparency is now recommended in the ISO 16363 repository trustworthiness assessment template.