27 August 2023

Final Report - GSoC 2023

The end

Shashwat TheTrio

It feels like just yesterday that I was writing my proposal for this year’s GSoC. I was excited and looking forward to not only working on the codebase again but also interacting with users of the dashboard, something I had never done before. While I always expected to get accepted into the program, I vastly underestimated how invigorating (and taxing!) user research can be.

And yet here I am, 3 months later. To be fair, I haven’t written a lot of code, but I’ve come to realize that - at least this year - the code wasn’t the most important thing. So unlike last year’s report, you’ll find that this one is more about the experience, the things I learned, and the people I talked to than about the code I wrote.

So without further ado, let’s get started!

Project Details

Overview

As always, for those who are new, here’s a quick overview of the dashboard and what my project aimed to do. I’ve taken this from my proposal, so it’s a bit outdated, but I think it still does a good job of explaining the project.

Wiki Education Dashboard is a complex web app for keeping track of contributions to Wikimedia projects. It’s widely used by the global Wikimedia community for edit-a-thons, classroom wiki writing assignments, and a variety of other initiatives.

The dashboard currently supports a variety of methods to track articles within the “Article Scoped Program” event type - namely categories, PetScan, PagePile, and templates. In a nutshell, all of these scoping methods make it easier to group and track a set of articles with a single condition, greatly reducing the effort of tracking related or similar articles. However, these tracking options are currently not obvious to find. This project aims to remedy that by incorporating these features into the course creation modal window itself - along with adding new features (like auto-complete, presets, etc.) - to make discovering and editing these values easier.

What did I do?

To make things a bit simpler to understand, I think it would be best to divide my work into 2 major categories - user research and code contributions. As I mentioned above, I didn’t write a lot of code this summer but I did do a lot of user research. So let’s start with that.

User Research

What is user research? Well, it’s exactly what it sounds like - research about the users. In this case, the users of the dashboard. I had never done user research before so I was quite excited to get started. I had a lot of help from my mentor Sage who sent an email to the dashboard mailing list asking for volunteers to talk to me. I also reached out to people on my own. I got some responses and while scheduling the interviews took a bit of time - I had to work around timezones and other commitments - I was able to talk to a few people and get some valuable insights.

Going over all the interviews in detail would take a lot of time and likely wouldn’t be very interesting to read so I’ll just limit myself to the major takeaways from the interviews.

Major Takeaways

Since the entire project revolved around Article Scoped Programs, I wanted to know how people were using them and what they thought about them. I wasn’t surprised to find that most people weren’t even aware of the feature - something my project aims to remedy. Even those who were using Article Scoped Programs weren’t using them to their full potential. Most people were using the category option and a few were using the PetScan option.

In one of my interviews, I talked to a user who had only just discovered the features of Article Scoped Programs and was quite excited to use them. She found out about them from a video tutorial on YouTube. This was quite interesting to me - while I did know that the scoping methods were a bit hidden away, I didn’t realize just how hard they were to find - not to mention that the lack of a guided setup meant that users could easily miss out on these features.

This made the work on adding scoping methods to the course creation modal even more important. If you put everything front and center, people are more likely to use it. And if you guide them through the process, they’re more likely to use it correctly. This is where the auto-complete, error handling, and other nice-to-have features come in. It’s very easy to make a mistake when adding a category or a template - especially when you’re working across multiple wikis.

The design was also important. I had a few ideas even before I got selected, but none of them were ideal - they involved compromising between functionality and simplicity, and I wasn’t happy with that. After discussing them with Sage and demoing a few prototypes to users, I was able to come up with a design that worked better for most people.

Other Feature Requests

There are some things I knew I would have to do even before the interviews - the previous section is all about them - though I wasn’t sure about the specifics, which is where the interviews helped. For example, I knew I would have to add the scoping methods to the course creation modal. But there were a few feature requests that I hadn’t considered before talking with folks using the dashboard. Let’s go over a few of them.

One was adding a way to find which articles were being tracked by a particular scoping method. This comes in handy when you are sure that you’re tracking an article but you still can’t see its edits on the dashboard. The idea was to add a “View Articles” button to each scoping method that would open up a modal listing all the articles the Dashboard associated with that scoping method. Add in a search bar and some hyperlinks and you have a fairly functional interface for finding tracked articles. A related feature request was a way to somehow summarize a PetScan ID with the things it was tracking - the logic being that it’s easy to think in terms of categories and templates but not so much in terms of PetScan IDs. While I did plan on working on this, I feel like the “View Articles” button is a better solution - not only because it’s easier to implement but also because it’s more flexible and works for all scoping methods.

Another feature request was a way to track multiple categories across wikis and with different sub-category depths at once. This would significantly reduce the amount of time spent on configuring scoping methods. This did take a bit more effort than I initially anticipated, but I was able to get it working in the end.
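To give a sense of what that bulk setup involves, here’s a rough sketch. Note that `parseCategoryEntries` and the line-based input format (`wiki:Category, depth=N`) are hypothetical illustrations for this post, not the Dashboard’s actual API:

```typescript
// Hypothetical shape of a single category scoping entry.
interface CategoryEntry {
  wiki: string;   // e.g. "en.wikipedia"
  title: string;  // e.g. "Category:Physics"
  depth: number;  // how many levels of sub-categories to include
}

// Parse a multi-line input where each line looks like
// "en.wikipedia:Category:Physics, depth=2". Lines without an
// explicit depth default to 0 (just the category itself).
function parseCategoryEntries(input: string): CategoryEntry[] {
  return input
    .split("\n")
    .map((line) => line.trim())
    .filter((line) => line.length > 0)
    .map((line) => {
      const [spec, depthPart] = line.split(",").map((s) => s.trim());
      const firstColon = spec.indexOf(":");
      const wiki = spec.slice(0, firstColon);
      const title = spec.slice(firstColon + 1);
      const depth = depthPart
        ? parseInt(depthPart.replace("depth=", ""), 10)
        : 0;
      return { wiki, title, depth };
    });
}
```

The point is simply that one form submission can describe many categories, each with its own wiki and sub-category depth, instead of configuring them one at a time.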

Of course, I couldn’t get everything done. One of them was to add a way to track an article - say about Albert Einstein - but across wikis of different languages. This is to say that tracking Albert Einstein on the English Wikipedia would also track Albert Einstein on the German Wikipedia without having to manually add de:wiki Albert_Einstein. I think this is a great idea but unfortunately, I couldn’t figure out a good way to implement it. I imagine it would require the use of WikiData to figure out the various articles on a topic/person across languages or wikis but unfortunately, I never got the time to work on it. I do hope to work on it in the future though.
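For the curious, the rough idea would be to look up an article’s Wikidata item and read its sitelinks, which map the same topic to its title on every wiki. Here’s a minimal sketch of just the request construction - the endpoint and parameters belong to the real MediaWiki `wbgetentities` API, but the function itself is a hypothetical helper, and the actual fetching and parsing are omitted:

```typescript
// Build a Wikidata API URL that, given an article on one wiki,
// returns the item's sitelinks - i.e. the same topic's titles on
// other language wikis.
function wikidataSitelinksUrl(site: string, title: string): string {
  const params = new URLSearchParams({
    action: "wbgetentities",
    sites: site,          // e.g. "enwiki"
    titles: title,        // e.g. "Albert Einstein"
    props: "sitelinks",
    format: "json",
  });
  return `https://www.wikidata.org/w/api.php?${params.toString()}`;
}
```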

Code Contributions

If you’re here for the code, this is for you. Here are all the pull requests I made this summer:

In a nutshell, my primary focus was to ensure that everything I did - from the scoping methods in the course creation modal to the scoping methods in the course edit modal - was modular. This is the reason that it took me a while to get this done. Building new components for each scoping method in the course creation modal and then another set for when you edit the course would have been much easier but it would have been a nightmare to maintain. Finding the right balance between abstraction and readability was a challenge but I think I did a good job. The result of going down this route is that any feature added to the course creation modal is automatically available in the course edit modal. This not only means that it’s easier to maintain but also that the user experience is consistent - regardless of whether you’re creating a course or editing one.
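As a rough illustration of what I mean by modular - the names below are hypothetical, not the Dashboard’s actual components - the idea is to describe each scoping method once, and have both the creation and edit flows consume the same description:

```typescript
// A single description of a scoping method, shared by both the
// course creation modal and the course edit modal. Adding a new
// method (or a new validation rule) here makes it available in
// both places automatically.
interface ScopingMethod {
  id: string;
  label: string;
  // Returns an error message, or null if the value is acceptable.
  validate: (value: string) => string | null;
}

const scopingMethods: ScopingMethod[] = [
  {
    id: "categories",
    label: "Categories",
    validate: (v) =>
      v.trim().length > 0 ? null : "Category name cannot be empty",
  },
  {
    id: "petscan",
    label: "PetScan",
    // PetScan queries are referenced by numeric IDs.
    validate: (v) =>
      /^\d+$/.test(v.trim()) ? null : "PetScan ID must be a number",
  },
];

// Both modals look methods up the same way:
function findMethod(id: string): ScopingMethod | undefined {
  return scopingMethods.find((m) => m.id === id);
}
```

With this shape, neither modal hard-codes anything about a particular scoping method, which is what keeps the two experiences consistent.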

Another thing I was rather stubborn about was testing. I wanted to make sure that everything I did was well-tested, so I wrote RSpec feature tests for the new functionality I had implemented. These ensured that everything - from the basic stuff, like actually tracking an article, to more advanced stuff, like auto-completing articles from across wikis - worked as expected.

Conclusion

There’s a lot more I could talk about, but I think this is a good place to stop. I’m really happy with the work I did over the summer and even happier with the feedback I got from the community. I got nothing but respect and admiration from everyone I talked to, and that helped me stay motivated.

Sage was an amazing mentor and I’m grateful for all the independence he gave me. I can easily imagine Sage contacting people, setting up the interviews, and even attending those interviews with me. But he didn’t. I can’t thank him enough for that - those interviews were a great learning experience and I’m glad I got to do them at my own pace.

I would also like to thank everyone else who reached out to me and helped me with the project. I’m grateful for all the feedback I got and I’m glad I got to work with such an amazing community.

And that’s about it. I still can’t believe how quickly these three months went by. While I can no longer participate in GSoC next year, I hope to stay involved and continue contributing to the project.

Until next time!

Categories

GSoC GSoC-23