Microsoft Project Oxford and VOIS

Microsoft have brought out a new face and emotion recognition library called Project Oxford that performs most of the functions of the C# library I was developing for VOIS, so I’ll be re-engineering my demonstration to take advantage of this new library.

Microsoft’s Project Oxford can now detect emotions in photos

At first glance, though, the Microsoft library performs much better than my own, especially with poorer-quality images. Having said that, I don’t know what the pricing structure for the library is.

If it is expensive, then I may revert to my own library and eventually release it on SourceForge.
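
In the meantime, for anyone wanting to experiment, emotion detection in Project Oxford is exposed as a simple REST call. Below is a minimal C# sketch based on Microsoft’s published endpoint; the subscription key is a placeholder you request from Microsoft, and none of this is VOIS code:

    using System.IO;
    using System.Net.Http;
    using System.Net.Http.Headers;
    using System.Threading.Tasks;

    // Minimal sketch of calling the Project Oxford Emotion API over REST.
    // The endpoint and header come from Microsoft's Project Oxford
    // documentation; the subscription key is a placeholder.
    public static class OxfordEmotionClient
    {
        private const string Endpoint = "https://api.projectoxford.ai/emotion/v1.0/recognize";

        public static async Task<string> RecogniseAsync(string imagePath, string subscriptionKey)
        {
            using (var client = new HttpClient())
            using (var content = new ByteArrayContent(File.ReadAllBytes(imagePath)))
            {
                client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", subscriptionKey);
                content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");

                // The response is a JSON array with one entry per detected face,
                // each holding a face rectangle and scores for eight emotions
                // (anger, contempt, disgust, fear, happiness, neutral, sadness, surprise).
                var response = await client.PostAsync(Endpoint, content);
                return await response.Content.ReadAsStringAsync();
            }
        }
    }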

Larger University Design Project – Flipped Classroom Design

As part of my university course, my larger design project looked at converting an existing learning program within my company to utilise the flipped classroom design.

The project rationale can be viewed by clicking the link below:

LargerProject_FlippedClassroomLearningProgram_DesignRationale

Alternatively, one can view this video presentation (password 12345).


University mini projects – understanding e-learning

As part of my university studies I’ve completed a series of papers on the following areas:

  • Conferencing
  • The future of VLEs
  • Gamification in education
  • Assessment of learning in a digital age
  • Flipped Classroom
  • Mobile Learning

I’ve collated these into one larger document, which you can view by clicking the link below:

MiniProjects_Darren_Bellenger_December2015

Going back to school!

I have just started the MSc Technology Enhanced Learning at Huddersfield University and will be writing up my thoughts and feelings whilst on the course in a new section imaginatively called “UNI”!

The course is three years long, and I’m hoping to use the University’s facilities to push along my work on VOIS, as well as to expand my knowledge, which until now has been entirely self-taught.

VOIS Development Part 3 – Procedure for recording emotions

It’s been a long time since I last posted about VOIS. I have a new job, and it has taken up most of my time.

At the end of September I’m undertaking a Masters at Huddersfield University, so I’m hoping the technologies behind VOIS can be taken forward with the help of the University’s facilities.

Given everything above, my work on VOIS has been limited, but I’ve been thinking more about when to use audio-based emotion recognition and when to use video-based emotion recognition. The diagram below shows my current thinking, which I’ll be testing later in the year when I have time.

Emotion process
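
To make the idea concrete, here’s a rough sketch of the kind of modality selection the diagram describes. The type names and thresholds below are purely hypothetical placeholders for illustration, not values taken from the diagram or from any real VOIS code:

    // Hypothetical sketch of choosing between video- and audio-based
    // emotion recognition. Names and thresholds are illustrative only.
    public enum EmotionSource { Video, Audio, None }

    public static class ModalitySelector
    {
        public static EmotionSource Choose(bool faceDetected,
                                           double faceAreaFraction,
                                           double speechLevelDb)
        {
            // Prefer the video channel when a reasonably large face is in frame.
            if (faceDetected && faceAreaFraction >= 0.05)
                return EmotionSource.Video;

            // Otherwise fall back to audio if the speaker is loud enough to analyse.
            if (speechLevelDb > -40.0)
                return EmotionSource.Audio;

            return EmotionSource.None;
        }
    }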

VOIS Development Part 2 – Choosing an Audio-based Emotion Recogniser

I’ve been pretty busy since my last post and have:

  • Improved the EigenFace-based facial emotion recognition by continually streamlining the facial dataset I use. This has improved the results, although in the future I think I will still need to move to a FACS (Facial Action Coding System) variant.
  • Decided to shelve my plans for using Xamarin for now, due to the increased workload it puts on me, not to mention the fact that I’ll need to upgrade my old MacBook, which is now too old for the latest Xcode build.
  • Created a Windows Phone variant of the original tablet app, running on my new Nokia Lumia 630.

I’m now moving on to a plan for how I will integrate audio-based emotion recognition into VOIS. Having reviewed a number of interesting papers, I’m going to implement the approach outlined in the paper “Perceptual cues in nonverbal vocal expressions of emotion”, published in the UK in 2010.


The paper outlines how measurable acoustic cues, such as pitch, intensity, and spectral characteristics, can be used to define emotion.
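
To give a flavour of what implementing this might involve, here’s a minimal C# sketch of estimating two of the basic cues, intensity and pitch, from a frame of mono PCM samples. This is a hypothetical starting point of my own, not analysis code from the paper:

    using System;

    // Sketch of two acoustic cues: RMS intensity and a crude pitch estimate.
    public static class AcousticCues
    {
        // Root-mean-square intensity of a frame of PCM samples, in decibels.
        public static double RmsDb(float[] frame)
        {
            double sumSquares = 0;
            foreach (var s in frame) sumSquares += s * s;
            double rms = Math.Sqrt(sumSquares / frame.Length);
            return 20.0 * Math.Log10(rms + 1e-10); // guard against log(0)
        }

        // Crude fundamental-frequency (pitch) estimate via autocorrelation,
        // searching the 60-400 Hz range typical of speech.
        public static double EstimatePitchHz(float[] frame, int sampleRate)
        {
            int minLag = sampleRate / 400;
            int maxLag = sampleRate / 60;
            int bestLag = 0;
            double bestCorr = 0;

            for (int lag = minLag; lag <= maxLag && lag < frame.Length; lag++)
            {
                double corr = 0;
                for (int i = 0; i < frame.Length - lag; i++)
                    corr += frame[i] * frame[i + lag];
                if (corr > bestCorr) { bestCorr = corr; bestLag = lag; }
            }

            return bestLag == 0 ? 0 : (double)sampleRate / bestLag;
        }
    }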

I’ll provide another update on how successful I’ve been at implementing the above approach, and on whether I can marry up the visual emotion feed I currently have with a new audio-based one. I haven’t yet seen anyone do this successfully, so it will be a real triumph if it works.

Bye for now.

Microsoft Build 2015 – Hololens for Education

It’s great that Microsoft are pushing education as a major stream for HoloLens, though I would have hoped for a slightly better Build 2015 demo than what was shown, perhaps something with location context sensitivity.

But still, good times ahead, and I’m looking forward to the day I can try out VOIS with HoloLens!

VOIS Development Part 1 – Facial and Emotion Recognition

Initially my VOIS development involves creating a series of prototypes that demonstrate each part of the complex functionality required, such as facial recognition, video/audio emotion recognition, and speech recognition.

The first to tackle has been a mobile app for facial and emotion recognition.


Detecting Andy and his facial expressions

The app (shown in the screenshot above) was developed in C# and uses the concept of EigenFaces to first detect someone the user knows (from a library of faces they accrue on their mobile device); then, as the person talks, the app detects the emotion in their facial expressions.
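
I won’t reproduce the app’s code here, but to illustrate the recognition step, here’s a sketch of the general EigenFace technique using Emgu CV (the C# wrapper for OpenCV). Exact signatures vary a little between Emgu releases, and this is the textbook technique rather than the app’s actual code:

    using Emgu.CV;
    using Emgu.CV.Face;
    using Emgu.CV.Structure;

    // Sketch of EigenFace-based identification with Emgu CV. Illustrative only;
    // the class and parameter choices here are mine, not the VOIS app's.
    public class KnownFaceRecogniser
    {
        // 80 principal components; a distance threshold for rejecting strangers.
        private readonly EigenFaceRecognizer _recognizer = new EigenFaceRecognizer(80, 4000);

        // Train from equally sized greyscale face images and matching person IDs.
        public void Train(Image<Gray, byte>[] faces, int[] personIds)
        {
            _recognizer.Train(faces, personIds);
        }

        // Returns the matched person ID, or -1 if nobody in the library is close enough.
        public int Identify(Image<Gray, byte> face)
        {
            var result = _recognizer.Predict(face);
            return result.Label;
        }
    }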

Currently the app runs as a Windows tablet app; my next task is to re-develop it in Xamarin so that it can be published to Android and iOS devices.

Bye for now.

An introduction to VOIS (The Visual Ontological Imitation System)

Maybe there are people who just don’t get where you’re coming from. This is an everyday reality for people with autism, but it is not just disabled people who need help with understanding emotion in others.

VOIS is an innovative design for an application (the brainchild of a friend of mine, Jonathan Bishop, who is himself autistic) which will assist autistic people in recognising the facial moods of people they are talking to and suggest appropriate responses.

Given that VOIS will work irrespective of what language is being spoken, there are obviously cross-over opportunities to use it in areas such as:

  • Defence
    Soldiers who have regular contact with, say, a tribal elder could use it to see whether the elder is being evasive, as well as how his mood changes over time.
  • Security
    During interrogation of suspected terrorists, along with standard questioning, VOIS could pick up evasiveness and suggest more questions in certain areas.
  • Immigration
    VOIS could also help in the questioning of asylum seekers.

Future versions of VOIS could be used via a head-mounted or fixed camera for surveillance roles.

I’ve agreed to help Jonathan with the development of a prototype version that will run on a range of mobile devices, and I intend to chart my progress via my blog.

VOIS screen


CLOUDmeet – a collaborative virtual world for business and education

I haven’t blogged for an awfully long time, so I thought I’d put a couple of updates on here regarding what I’ve been up to.

One of my current pet projects is trying to develop an HTML5/JavaScript-based collaborative virtual world, which would be similar to VastPark and ProtoSphere but would run natively on any modern device that supports HTML5.

It’s going to take me some time to develop, but I have already done some concept screens:

(Concept screens: login, main view, calendar, avatar, and the virtual world itself)

I’ll keep posting my progress on the development as it goes along.