Quality culture at LBU

In part this was attributed to the failings of the IT systems meant to support the quality process. The team learnt that one of the key information systems in this respect, QUANTIS, commanded limited confidence among staff, and that there was a desire to migrate to new software, possibly a Google-based platform. In response to these difficulties, one dean indicated that he relied on information about the quality of provision from other sources: conversations with students, a complaints box, forms completed on the internet, and discussions with colleagues.

In these circumstances it was difficult for the team to see how there could be formal and proper consideration of the key indicators relating to quality. In addition, it was not clear to the team how data on student retention, progression and achievement were considered alongside feedback from students. Teaching staff volunteered the view that, depending on the faculty, between 15% and 20% of a new cohort would fail to complete their course. This figure excluded students who failed part of the course but continued their studies by repeating the failed elements.

The university’s SER expressed a view from students that there was a need for an improved system of quality monitoring. This was reinforced in meetings with students, who identified a number of issues that concerned them:

* subjective assessment (favouritism) of students by some professors;
* anxiety that questionnaires were not kept confidential and that honest comments might result in victimisation (although there was greater confidence now that the questionnaires were handled centrally);
* students recruited with low grades who subsequently were disruptive in lectures and seminars (some students went so far as to suggest a separate entrance examination as a way of counteracting this problem).

The team was interested to hear of a Facebook survey conducted by students in one of the faculties that asked why students dropped out. The answers were in line with other comments received from students. The four main reasons were: (1) good students leaving because of disruption in lectures; (2) the scheduling of lectures from early in the morning to late at night (8am to 8pm); (3) difficulties in relationships between some professors and their students; and (4) students entering programmes without the relevant subject background. It was also acknowledged that some students left for economic reasons.

The team gained the impression that these issues were recurrent and that the quality assurance processes were either (a) failing to identify the problems or (b) failing to tackle them. However, the team was advised of a major exercise, initiated by the Rectorate and currently taking place in all faculties, to evaluate the academic and financial viability of programmes; no doubt these reviews would draw on data and information linked to many of the issues raised above. While this was undoubtedly a crucial and timely exercise, the university also needed to strengthen its core day-to-day operations in quality management.

There was, in the view of the team, a strong case for enhancing the role and authority of the Quality Assurance Department, which might be achieved, in part, by linking it directly to the Rectorate. Notwithstanding some of the less than helpful requirements of national laws, for example on staff appointments, deliberative structures and curriculum development, ultimately the university needed to ensure that its quality assurance processes could identify problems and resolve issues with greater rigour. In this context, the university would benefit from consulting the Standards and Guidelines for Quality Assurance in the European Higher Education Area (ESG).
