Quality Culture at UASVM

From EiWiki


Current revision as of 10:53, 1 February 2014

In overall terms, the IEP team was interested in assessing the stage of development reached in the area of quality at UASVM. The team noted that the university’s SER identified “high quality education” as being amongst the draft goals drawn up under the rector following his appointment in 2012. The team also observed that the university wished its approach to quality assurance to cover all activities, both academic and non-academic. As is explained in the present section of the team’s report, and as is acknowledged by the university, quality assurance practices, and hence quality culture, are not yet as well established, organised or embedded as the university would wish. While the team noted that steady progress is being made in several areas, it was evident that formal measures for quality assurance and quality control had been introduced relatively recently, in 2006. The implementation of the ISO 9001 model commenced in 2007. Though procedures for teacher evaluation and peer evaluation have been in use since 2008, these have been revised under the present rector for implementation from October 2012. Moreover, as is noted below, the university’s quality procedures to date are largely shaped by external forces, and there is scope for the university to be more proactive in evolving a “UASVM” quality philosophy and approach through the introduction of new or enhanced procedures.

On the basis of the foregoing, the IEP team identified five important areas of quality assurance and quality evaluation at UASVM that team members wished to explore in depth: the operation of quality procedures and processes at faculty and institutional levels; developments to support student representation and student involvement in quality processes, such as student evaluation of teachers; procedures for the evaluation and appraisal of teachers; processes to support the self-critical internal review and evaluation of study programmes and academic provision; and, underpinning and informing all of these, the coherence of the university’s approach to quality assurance and evaluation, including the extent to which an integrated approach to administrative and organisational quality management on the one hand, and academic quality on the other, was being achieved.

During the team’s visits, consideration was given to the external parameters for quality assurance and accreditation, which are laid down by ARACIS, the national body responsible for the national programme of institutional evaluation and programme accreditation in Romanian higher education institutions. Each programme undergoes an ARACIS evaluation on a five-year cycle. UASVM completed an ARACIS institutional evaluation in 2010 and obtained “highly trusted” status. The team learned that ARACIS requirements and guidance play a major influencing role in the university’s approach to quality, at both institutional and study programme levels. For the former, there is an expectation that UASVM, in common with other universities, will have in place appropriate evaluation arrangements, while for the latter there are quite specific criteria, standards, indicators and guidelines for checking courses. The ability to continue to meet ARACIS requirements is therefore a central consideration for the university going forward, as it develops its approach to quality matters. Accordingly, the team looked closely at the implementation of structures and quality processes that enable the university to meet these external requirements. At the top of the organisation, the vice-rector for education and quality management holds institutional management responsibility for quality matters, including supervision of the quality assurance department. That department undertakes the administrative oversight of the evaluation of teaching staff, including completion of summary reports upon which faculties are required to act. The department also contributes to quality assurance more generally, including support for the implementation of the ISO 9001 quality management system. The vice-rector also leads the Academic Council, which holds responsibility for academic affairs.
However, the team noted that a key element of governance and management relating to quality is the role played by the Senate Commission that includes responsibilities for quality assurance and evaluation.

Until 2012, this role had been performed by the Commission for Quality Evaluation and Assurance (CQEA). In January 2013, responsibility was transferred to the Senate Commission for Education, Research and Quality Management, a body of which the vice-rector is not a member. The responsibilities of this recently restructured commission are quite wide, and include overseeing internal processes for quality assurance and evaluation, as well as an advisory role to Senate. The commission also undertakes periodic and annual evaluations of how effectively faculties are assuring quality, and introduces measures for improvements by academic departments, as appropriate. The team noted, however, that the Senate Commission does not contain faculty quality representatives. Given that key elements of these arrangements are still relatively new, and given the importance of effective interfaces between the operation of the Commission, on the one hand, and the leadership and direction on quality matters, including quality monitoring, that the vice-rector is required to provide, on the other, the IEP team wishes the university well in taking these arrangements forward.

The team also explored the operation of quality assurance, evaluation and monitoring at the levels of faculty and department. The team noted that deans of faculty, heads of department, and study programme coordinators each hold responsibilities for quality assurance and evaluation processes at their respective levels in the organisation. Faculty operational plans contain objectives relating to education and quality, but the principal mechanism for quality is the permanent Faculty Commission for Quality Evaluation and Assurance (CQEA), which is also mirrored at the level of department. The IEP team was informed that each faculty CQEA has in place mechanisms for checking quality problems, including student issues, across the faculty, and that they undertake annual evaluations of teaching quality with reports and action plans being forwarded to the Senate Commission. The team was advised that the CQEA is also responsible for ensuring that action plans from quality reports are implemented. At department level, each quality commission typically contains three members with responsibilities for quality assurance.

However, having considered all these arrangements, the team noted from the example provided that faculty annual quality reports focused almost exclusively on summaries of student evaluations of teaching. Other quality matters, such as student achievement data, assessment issues, staff development opportunities, or enhancement plans, did not seem to be addressed in these reports. This led the team to question how the Senate Commission was able to exercise the necessary oversight of all aspects of quality monitoring at faculty, department and study programme levels. With this in mind, when matters relating to responsibility for quality oversight at the level of the Senate sub-committees have been finalised, the IEP team strongly recommends that a robust and transparent accountability mechanism be put in place for ensuring that faculty quality reports are monitored effectively.

Other matters that drew the attention of the IEP team included the arrangements that have been put in place for teacher evaluation and peer evaluation, and also for student representation and the involvement of students in quality assurance processes. The team was encouraged to see such developments. Regarding student representation and involvement, however, while the legal requirement stipulates that students are entitled to 25% representation on the main governance councils, at institutional and faculty levels, as noted in section 2 earlier, the team learned that this entitlement did not extend to the permanent commissions at faculty level.

The IEP team noted that the university’s peer evaluation scheme involves each member of teaching staff being evaluated on an annual basis by two colleagues of equal rank. The template invites comment on the teacher’s pedagogic skills, scientific output, extra-curricular activities, and relationships with colleagues. Teachers are rated from “poor” to “excellent”. Linked to this, teaching staff undertake self-evaluation, and evidence from these processes can be used for promotion purposes through consideration by the Promotions Commission.

The results of the evaluations are considered by the head of department for management and performance monitoring purposes. The head of department evaluates each academic and meets privately with each member of staff to discuss that individual’s results. In this process, account is also taken of the outcomes of student evaluations of professors, the process for which is discussed below. The collated results of peer evaluations are also considered at faculty and department councils, while a summary report on all such evaluations is considered by the relevant Senate Commission. In reflecting on this peer evaluation process the IEP team noted that it was still relatively new. In acknowledging this, the team observed that the scheme is primarily focused on performance management. Team members took the view that, in due course, the university may choose to introduce a more developmental element to the process. In the view of the team this could be achieved through adding peer observation of an individual’s teaching, where two colleagues might evaluate each other, on a confidential basis, and agree to share ideas about pedagogic good practice and student-centred learning.

The IEP team obtained further insights into the use made of quality evaluation by the university by focusing on the operation of the recently revised scheme for student evaluation of teaching. The team learned that all students who have an attendance record of over 50 per cent are able to provide anonymous feedback at the end of each semester and that response rates can be as high as 70 per cent. However, on examining the feedback form, the team observed that the requested evaluation focuses only on “teaching”, the “course”, and some student self-evaluation, but does not include student learning. From the team’s perspective, and particularly in view of the importance attached by UASVM to the quality of the student learning experience, this is a matter upon which the university should reflect as it reviews the effectiveness of the template and the overall process. Information from the evaluations is received by the head of department and, where student ratings and grades are lower than is acceptable, a meeting takes place with the individual teacher. Grades for each professor are also made public at meetings convened by heads of department. Overview reports are drawn together for consideration at faculty council and Senate levels, and information also feeds into the annual faculty quality report.

The team was especially interested in taking a close look at the use made of this procedure and the information it produces, including arrangements for providing feedback to students on the issues they raise, and to explore whether the objectives of the process were being fully met and how far this was being monitored by the university. Discussions with students and staff indicated that there were mixed views and experiences. A number of students with whom the team met indicated that they were not aware of what happens to the feedback they provided. Even where students indicated awareness of some contexts where feedback outcomes were considered, as noted earlier, students were neither involved nor represented in those discussions. Therefore, while there are procedures and steps taken to make summary reports available, such as on the university intranet, or through the proceedings of faculty councils, and while some management actions are taken to address cases of poor teaching, it was not clear to the IEP team (or to students) how students are informed of actions taken on specific issues that affect them. Moreover, it appears that what students find out, and how they do so, can be quite variable. Added to this, the team was unable to establish which, if any, institutional body (such as the Senate Commission responsible for quality matters) took steps to ensure that the feedback loop was closed or that the objectives of the process were being met. Therefore, while recognising the opportunities for students to provide anonymous feedback, the IEP team advises the university to reflect on the use made of teacher evaluation surveys, with a view to developing more analytical and action-focused summary reports, and also ensuring that mechanisms are put in place across the university, its faculties and departments, for informing students of actions taken to “close the loop” in response to their concerns and the feedback they provide.

The team’s deliberations on the university’s quality review and evaluation processes also took into consideration the extent to which procedures for annual monitoring and review of curriculum and learning and teaching matters were undertaken at the level of the study programme, or at the level of each student cohort. From quality reports referred to in the university’s documentation, and from examples made available to the IEP team, it was apparent that most emphasis was placed on faculty or department level summary evaluations, or on evaluation focused at the level of the individual teacher. From the evidence made available, and through discussions with academic staff, the team formed the view that where annual or periodic review took place at the level of the individual study programme, this was driven for the most part by the criteria developed by ARACIS, and also by the quinquennial review and accreditation cycle of that body. While recognising the progress being made in the various evaluation processes discussed above, the team formed the view that the university should be more proactive in devising its own approach to annual monitoring and periodic review, and that this should place emphasis on self-critical evaluation by study programme teams on matters such as learning, teaching and assessment, student-related data, student feedback and improvement plans. In the view of the IEP team, this approach could accommodate the expectations of ARACIS while simultaneously reflecting the university’s own needs for quality monitoring. Drawing on the experience of other universities, the IEP team puts forward the view that an effective system for annual monitoring and evaluation should incorporate evaluation by those nearest to the student experience, namely, all members of study programme teams.
Therefore, as the university seeks to encourage the ownership of quality processes and the development of a quality culture, the IEP team advises that the capability for self-critical analysis of academic provision should be strengthened by the introduction of a procedure for the annual monitoring and evaluation of each study programme by study programme coordinators and their teams.

In completing its assessment of progress towards the development of a quality culture, the IEP team considered all of the steps taken to date in the development of quality assurance processes. In reflecting on the need to ensure “fitness for purpose” of quality assurance arrangements in the context of UASVM, the team considers that there is a need to identify a framework for academic quality assurance and enhancement to complement the framework being developed in the area of organisational quality management. The team noted the adoption of the ISO 9001 quality model to assist improvement in organisational effectiveness, and acknowledged that such a model can bring benefits in the area of administration and general quality management. For example, the team was persuaded that this model is helpful to the university in preparations for ARACIS accreditation and evaluation, where sound document control procedures are essential. Nevertheless, the IEP team notes that the application of quality models that have their origins in the world of business, commerce or industry may not necessarily facilitate a clear focus on learning and teaching and the student learning experience, and may not entirely fit with all the requirements of a university.

Bearing these matters in mind in their discussions with staff at all levels of the university, the team members noted that there was very little awareness of academic quality frameworks such as the Standards and Guidelines for Quality Assurance in the European Higher Education Area, otherwise known as the European Standards and Guidelines (ESG). Therefore, as it seeks to elaborate its quality philosophy, and as it works towards developing an integrated quality system that is fit for academic purposes, the university may wish to reflect further on the merits of Part One of the ESG. The standards and guidance contained therein, on matters such as the approval, monitoring and periodic review of programmes and awards, the assessment of students, the quality assurance of teaching staff, and student support, may be useful as reference points for the vice-rector and her colleagues.

Therefore, while noting the use and implementation of an approach to quality management and administration based on the ISO 9001 model, the IEP team strongly recommends that as the university develops its proposed five-year quality strategy, it should broaden its focus on quality by developing a framework and set of principles for academic quality assurance and enhancement which draws on Part One of the European Standards and Guidelines.
