Improving Student Computer Access

Photo: Students using the new computer lab

One of the limiting factors in the success of the elearning training is the availability of open access computer labs for students. Although most computer labs use relatively recent hardware and operating systems, few of their PCs are fully functional: many are crippled by viruses or broken hardware, or simply aren't connected to the network. Most labs also have restricted opening hours or no open access at all, and are only used as teaching labs.

A network-centric model

Key Facts

  • 2 new computer labs
  • over 80 open access terminals
  • further 70 terminals available soon

To help ensure the success of the elearning programme, Digital Campus set about working out how computer terminal access for students could be vastly improved, and how a cost effective, maintainable and hyper-scalable computer architecture could be delivered. For us, the most obvious solution was to use thin client technology with open source software, which has a number of distinct advantages over traditional PC labs:

  • Centralised IT management – with little or no software on the client terminals, the computing capacity and management is focussed within shared, robust, redundant servers in the MU data centres, where all the software configuration and data is stored and maintained.
  • More efficient and cost effective use of hardware – with horizontal scaling, old servers can be added to the server cluster to increase capacity, and PCs past their useful life for running the latest operating systems can be repurposed as network-booting thin client terminals (see the sketch after this list).
  • Identity management – users have one username and password giving access to the same desktop environment, applications and data whichever terminal they use to log in, even if the terminal is on another campus. So no more sharing of PC admin accounts, or moving files around on flash disks, CDs etc. Centralised user data storage also enables reliable backups to be taken.
  • No software license lock-in – using only open source software avoids licence costs. Open source software is also far less susceptible to viruses.
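As an illustration of how the network booting works, here is a minimal sketch of a dnsmasq configuration serving PXE boot images to repurposed PCs. The subnet, lease pool and paths are assumptions for illustration, not our production settings:

    # /etc/dnsmasq.conf – minimal PXE boot service (illustrative settings only)
    dhcp-range=192.168.1.50,192.168.1.150,12h   # address pool for lab terminals
    dhcp-boot=pxelinux.0                        # boot loader sent to PXE clients
    enable-tftp                                 # serve boot files over built-in TFTP
    tftp-root=/var/lib/tftpboot                 # where pxelinux.0 and the kernel live

A terminal with PXE enabled in its BIOS then picks up an address, fetches the boot loader and kernel over TFTP, and runs its desktop session on the server.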

This network-centric design isn't without drawbacks. Although a smaller IT support team is required, that team needs a higher skillset to ensure effective server and network management. Whereas in traditional PC lab environments the availability and maintenance of desktop PCs is distributed (and often falls into the hands of the end-user), with network-centric architectures the thin client functionality in the labs depends entirely on the network and server availability maintained by the university ICT staff.

Assessing thin client technologies

The Digital Campus pilot, in designing a network-centric architecture for Mekelle University, explored several varieties of thin client technology: Sun Ray ultra-thin clients (mostly second-hand, some devices up to 10 years old), PXE-bootable PCs with a Linux kernel for remote desktop access, Sun Ray access client software on recycled PCs, and LTSP Linux stations with recycled PCs. We're still in the process of assessing which is the most appropriate long-term solution.
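For the LTSP option, the basic server setup on Ubuntu of this era is roughly as follows; a sketch assuming a single standalone LTSP server on the lab network, not our exact configuration:

    # Illustrative LTSP 5 setup on Ubuntu (assumes one standalone server)
    sudo apt-get install ltsp-server-standalone   # LTSP server with its own DHCP/TFTP
    sudo ltsp-build-client                        # build the client chroot image

Recycled PCs then PXE-boot from this server and run their desktop sessions on it, which is what makes even very old hardware usable as a terminal.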
Photo: Installing the new network

Although some new hardware was required (mainly the servers) and was donated by the project, the majority of the budget is devoted to staff training and capacity building with tutors and ICT staff, with the expectation that existing hardware is reused and that the university factors the cost of new and replacement hardware into its existing budgets.

In November 2009, we opened two new computer labs, one for the Technology Institute (39 terminals) at the main university campus and another at the Health Sciences College campus (42 terminals). These are supported by two OpenSolaris servers, one on each campus, to provide failover capability. Our experience so far shows that these labs have far higher availability than any of the traditional PC labs.
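For reference, Sun Ray Server Software manages this kind of failover through server groups configured with its utreplica tool. The sketch below is our recollection of the syntax from the SRSS administration documentation, with placeholder hostnames – check the utreplica man page before relying on it:

    # Illustrative SRSS failover group (hostnames are placeholders)
    # On the primary server, declare the secondary:
    /opt/SUNWut/sbin/utreplica -p campus2-server
    # On the secondary server, point back at the primary:
    /opt/SUNWut/sbin/utreplica -s campus1-server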

Next steps

The Technology Institute is now in the process of creating a new thin client computer lab with 70 terminals, reusing old PC hardware. Digital Campus is supporting the implementation, providing training, advice and support. We are also continuing to support the research and development of the server software and configuration needed to create a hyper-scalable network-centric architecture for the university.

Data Collection using Smart Phones

Gathering data from remote locations can be very time consuming, and paper based collection methods may mean it takes a long time for the data to reach those who are going to use or analyse it. Collecting accurate medical and epidemiological data is essential to get a true picture of the health of the inhabitants, for resource planning and research. With the growth in availability and decline in cost of smartphones, they seem an ideal technology to help improve the accuracy of the data collected and to vastly reduce the time taken for it to reach those who need it.

Interviewing rural health workers

Key Facts

  • 64 rural health workers interviewed
  • Symbian and Android smartphones tested
  • better than expected GPRS coverage

Digital Campus is testing a variety of technologies for the collection of public health data, to help determine which may be most suitable for application in the field. Working with PhD students in maternal health care, an initial study to discover the possible challenges and options for using mobile technologies has already been completed. Over 39 health posts in 3 districts of Tigray (northern Ethiopia) were visited to interview the Health Extension Workers (HEWs) based there.

They were interviewed to determine their knowledge, training and experience in maternal health, with a view to identifying any gaps in their knowledge and designing a programme to help fill these. Additionally, and crucially if a technology-based solution is to be implemented, information regarding their technology experience and the infrastructure facilities (such as mobile reception and electricity access) was collected.

Fast and accurate data dissemination

All the data collected during the interviews was recorded on a smartphone using a questionnaire created with the EpiSurveyor web application (a proprietary platform, but a pleasant one to work with so far). Using the client software installed on the phone itself, responses to the survey are recorded on the handset; as soon as an area with GPRS signal is reached, the data is automatically uploaded to the EpiSurveyor server, making it available to anyone (with the right permissions) around the world. Each record uploaded is automatically time and location (GPS) stamped, making it possible to pinpoint exactly where and when the interview took place.
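To give a feel for what such a record carries, here is a purely hypothetical example – the field names and layout below are illustrative, not EpiSurveyor's actual format:

    # Hypothetical uploaded survey record (illustrative field names only)
    record_id:  hp-0042
    timestamp:  2010-06-14T10:32:05+03:00    # stamped automatically on upload
    gps:        13.4967, 39.4753             # latitude/longitude of the interview
    responses:  q1=yes; q2=2 years; ...      # the questionnaire answers themselves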

Photo: A rural health post

One of our initial concerns was that the GPRS coverage would be limited, perhaps only to large towns and cities, but in fact almost all the health posts had good GPRS coverage, even those in villages without reliable electricity provision. Only a handful of very remote posts had no mobile coverage at all.

We used Symbian and Android based smartphones to collect the data, but in fact the EpiSurveyor client software will run on almost any Java-enabled phone. We are also currently setting up a more sophisticated platform based on Sana, a community-developed project from MIT, with the intention of developing it into a more powerful open-source mobile health platform that will allow us not only to gather epidemiological data, but also to use it for electronic clinical records, telediagnostic applications, decision support systems and postgraduate training.

We are in the process of writing a detailed technical paper, describing in more detail the survey results and technology used, along with the main challenges of using smartphone technologies in this environment and some possible solutions.

Next steps

Over the coming months we will provide a small group of Health Extension Workers with smartphones, to determine what issues and challenges (if any) they face in using them, and to gain a clearer picture of how this technology could be used to improve their skills, training, data collection and reporting methods. Once the results from this pre-implementation phase are known, we will be in a position to design a larger scale programme.

Open Educational Resources

What is OER?

As I mentioned in previous posts, OER is defined as digitized educational materials offered freely and openly for use and re-use in teaching, learning and research.

OER challenges

Using OER in education presents some important challenges:

  • The globalization of knowledge societies, related to the rise of knowledge-intensive economies and the demand for a skilled population.
  • Challenges to education systems: extending the reach of education while improving its quality and flexibility.

Could technology help?

The development of new technical solutions, increasing connectivity, the growing number of low-cost devices and the rise of open digital content together create the infrastructure to facilitate knowledge sharing in a global context marked by social, economic and cultural differences.

OER: an academic and cultural challenge

The explosive increase of OER reinforces the altruistic tradition of academic sharing. Initiatives like MIT OpenCourseWare, the UNESCO actions, the OpenCourseWare Consortium movement and others have opened a wide debate about the implications of OER, and about its advantages and disadvantages for educational systems in different parts of the world.

OER and the cultural context

OER content that is useful in one cultural, academic and economic context is not always applicable in a different one.
When a university is considering the use of open content in the development of its subjects, it must carefully weigh the relevance and implications of implementing OER, and imagine its prospects for participating in the OER movement as an institution.
Participating in this movement means going beyond the role of a user of open content towards an active role in sharing the didactic materials developed by the university's own teachers. This step raises important challenges: developing the pedagogical and technical skills among teachers to produce interactive content, and establishing methods of quality control.

Interesting views

An interview with Catherine Ngugi, project director of OER Africa, offers some interesting views:

  • Some resources created elsewhere might not be culturally appropriate, or relevant, in another place.
  • Connectivity is one of the practical challenges.
  • An important point is the availability of free licences, with permission for free use.
  • The support from university management is crucial.
  • The notion of open learning is an incentive for academics, but they need to know how to do it.
  • Partnership with other universities is a key issue.

For more detail, read Catherine Ngugi's interview (http://www.universityworldnews.com/article.php?story=2010121021305756).

Video Content Management and Streaming with Kaltura and Moodle

Through the elearning training we are trying to encourage teachers to make more use of video and other multimedia content in their courses. This presents us with several issues, mainly because most video streaming sites are blocked by the university (to save bandwidth). This means we either don't include the videos or we download them to run locally. So far we've just been uploading them into the Moodle course, which is fine for relatively low numbers of videos (or for very short videos), but is soon going to become unsustainable. Also, we'd like to suggest video content teachers may wish to use – so it wouldn't be appropriate to have these filling up the Moodle server.

One solution is to use a multimedia management and streaming server, so over the last few days I've been testing out Kaltura. It's an open source video content platform and has plugins for Moodle and WordPress, amongst others.

Installation on my laptop was straightforward enough, once I'd got the necessary prerequisite packages installed and the settings right. A couple of issues I did come across:

1) On my first attempt at installation, it installed on the root of my webserver, so I was unable to access my other web applications. This was because I had specified 'localhost' as the domain. I tried to figure out how to move it to a subdirectory (see: http://www.kaltura.org/moving-installation-new-directory) but haven't got that one figured out yet, so I just set up a new host (http://kaltura.localhost) and used this instead. Now I can access both Kaltura and my original webapps, without switching configurations and restarting Apache.
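For anyone wanting to do the same, the separate-host trick needs roughly the following; a sketch in which the document root path is an assumption, not my actual install location:

    # /etc/hosts – point the new hostname at the local machine
    127.0.0.1   kaltura.localhost

    # Apache virtual host for Kaltura (DocumentRoot is an assumed path)
    <VirtualHost *:80>
        ServerName kaltura.localhost
        DocumentRoot /opt/kaltura/web
    </VirtualHost>

With this in place, requests to http://kaltura.localhost go to Kaltura while http://localhost still serves the other webapps.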

2) When the prerequisites say that you need a mail server, it really does mean that you need one! After installation, when creating publisher accounts, the login details are sent by email only – so there's no way to set the password except by following the link in the email. I had assumed I'd be able to reset the passwords manually, so that the mail server integration wouldn't matter too much. Given that this is just running on my laptop, I didn't have a mail server running, so I then had to set about getting one configured. Fortunately I found instructions on how to configure postfix to relay through a Gmail account on Ubuntu (I'm running 10.10). I set up a clean/default postfix installation and used the settings/instructions posted in the comments by Michael M. I used a 'disposable' Gmail account, so that if something goes wrong I won't get blocked from my normal Gmail account, but it seems to be working well so far. It's also good that I can now have emails sent from all the webapps on my machine.
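The relevant postfix settings amount to roughly the following; a sketch of the standard Gmail relay setup, with the account name and password as obvious placeholders:

    # /etc/postfix/main.cf – relay outgoing mail through Gmail (illustrative)
    relayhost = [smtp.gmail.com]:587
    smtp_sasl_auth_enable = yes
    smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
    smtp_sasl_security_options = noanonymous
    smtp_tls_security_level = encrypt

    # /etc/postfix/sasl_passwd – credentials for the disposable account
    [smtp.gmail.com]:587    disposable.account@gmail.com:password

    # Then hash the credentials file and restart postfix:
    #   sudo postmap /etc/postfix/sasl_passwd
    #   sudo service postfix restart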

With these two issues resolved, I was ready to start having a play. All seems to be working well, although I was hoping that people would be able to browse the uploaded content without having to log in first. I guess we'd just need to create a generic account. If anyone knows how to set this up then please let me know – or is there a generic Kaltura content browser application that I could use?

I tried uploading a few flv and mp4 videos to embed in a webpage, and they seem to work well. A little slow on my machine, but then my netbook probably isn't designed to be a media processing and streaming server!

My final experiment was to look at the Moodle plugin; unfortunately I had a few more issues getting this working. When trying to register the module in Moodle, I kept getting the error 'Your Kaltura registration failed. Missing KS. Session not established' when entering the URL, username and password for my Kaltura server. After a bit of investigation I found it was a bug in how the partnerId was(n't) being passed. I found a hack around this (see: http://www.kaltura.org/config-moodle-mod-moodleadmin-page), but it's not pretty!

I now have the option to add a video resource in Moodle directly from my Kaltura server – or so I thought: currently, whatever I search for (tags, video titles, categories which I know exist in the account I have) returns no results. The next step is to try to figure out why I can't find any of the videos I have uploaded…

Testing alternative thin-client server solutions

The thin client solution we currently have running in Mekelle is based on OpenSolaris, and we have a variety of terminals – a mixture of Sun Ray 1s, Sun Ray 2s and Nortech clients. Using the Sun Ray session server, the Sun Ray terminals perform well, but when the labs are full of students the Nortech terminals are significantly less responsive. There are a number of possible reasons for this, among them the protocols used and the network. There is a huge range of other configurations and technologies we could use to provide a robust and scalable thin client architecture.

I've spent a few days this week in Barcelona with Cast-Info investigating their Desktop4All solution, which we're looking to trial as an alternative to the OpenSolaris setup we currently have. Goitom, one of the PhD students from Mekelle, will spend the next few weeks based in the Cast-Info offices, learning how to install and set up the server system used for Desktop4All, with a view to installing it when back in Mekelle in a couple of months.

Desktop4All, based on Linux, is a set of integrated open source applications. It's likely to produce a similar end result to the solution we already have running with OpenSolaris, but the main advantage for us will be the support and documentation available as a reference. Testing Desktop4All will give us the opportunity to collaborate in its development and to investigate whether we encounter the same types of issues as we have had with OpenSolaris.

When we started the Digital Campus project, I think there was some concern over whether the students would need much training in how to use a non-Windows operating system, given that much (all?) of their previous experience of using computers was with Windows (usually XP). This has turned out not to be the case: since many students have had limited time to become locked into Windows, we've found few issues with students being unable to navigate the interface or use applications. I suspect we don't always give the students credit for their ability to adapt to new interfaces and systems (especially judging by how quickly they find their way to webmail, YouTube and Facebook).

Student Inductions

I spent my final week in Mekelle helping to run student induction sessions for the Health Sciences College. We now have around 600 students registered on elearning courses (from both the Technology Institute and the Health Sciences College), with over a third of these having completed our initial student survey – so we should be able to get some good information about their expectations and previous computing experience.

As always, my last few days in Mekelle were very hectic – my workload seems to increase as my departure date approaches! But we have now got over 20 classrooms in the Health Sciences College connected to the network, with projectors and computers, so teachers no longer have to carry their laptops in order to give a presentation, and they have access to the internet within the classroom. Currently these computers are running Windows, but we'll change this so they boot across the network and act as thin client machines.

I was also helping to advise the Technology Institute on how they can massively increase their computing infrastructure using the thin client model. They have many hundreds of old monitors to make use of. There is a long way to go to get this set up, especially as the institute needs to staff and train an ICT team/department.

We still have the issue that the labs we have aren't able to cope with the number of students wanting to use them. I'm getting a lot of requests to allocate specific times for classes, but I'm being quite firm that the labs should remain open access, rather than becoming a substitute for the lack of maintenance in the departmental computer labs.

I'm now trying to have a bit of time off in the UK (without getting bogged down in emails about the labs, training etc!), before heading to Spain to work at Alcala Uni for a few months.
