In recent years, responsible artificial intelligence (AI) has been receiving more attention. The potential of AI is undeniable, and interest in applying it across many industries keeps growing. However, applying AI comes with risks, which is why ethical frameworks and guidelines have emerged to regulate and limit them. AlgorithmWatch (2020) compiled an inventory of AI ethics guidelines containing over 160 entries. Most of them were published after 2018, reflecting the increased attention to making AI more responsible.
However, as these AI ethics guidelines evolve, it turns out they are not easy to apply: they tend to be too abstract and generic to be readily usable in practice. More practical guidelines are needed to help navigate the complexities of AI implementation. My thesis here at SURF investigates what is necessary to create a standards framework that helps institutions use AI more responsibly and offers more practical guidance. This blog post explores the applicability problem of these guidelines and how my thesis aims to bridge the gap between abstract guidelines and practical application in higher education.
In the ever-evolving realm of scientific research, the advent of Artificial Intelligence (AI) has been like the discovery of a new frontier. Yet, it isn't without its hurdles. This was a central topic during our panel session at SURF Research Day 2023, where we delved into the intricacies of employing machine learning as a research methodology.
Energy is an emerging topic in the scientific computing ecosystem and is becoming a design point for future research. Science relies increasingly on digital research computing as a tool for analysis and experimentation. The exponential increase in demand for computing means that classically designed ICT infrastructure will soon become unsustainable in terms of its energy footprint. We need to experiment with energy-efficient methods, tools, algorithms and hardware technologies. In the Netherlands, we are working towards zero energy waste for high-performance computing (HPC) applications on the national supercomputer “Snellius”. This work involves discussing challenges, proposing new research directions, finding opportunities to engage the user community, and taking steps towards the responsible use of software in research.
Traditionally, supercomputing focuses on improving latency or throughput, which are critical for applications such as drug discovery or climate simulations. For many decades we have developed infrastructure, algorithms, and software tools to obtain these improvements. Given the rapid increase in the energy usage of ICT services, further emphasised by the imminent energy crisis, it is a priority to understand and optimise the energy consumption of research computing applications.
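The latency-versus-energy trade-off mentioned above can be sketched with a simple back-of-the-envelope model. This is an illustrative approximation only, not the methodology used on Snellius: under dynamic voltage and frequency scaling (DVFS), dynamic power scales roughly with the cube of clock frequency while runtime scales inversely with it, so energy scales roughly with the square. All numbers below are made up for illustration.

```python
def energy_joules(base_power_w: float, base_runtime_s: float,
                  freq_scale: float) -> float:
    """Estimate job energy when the clock is scaled by freq_scale (0-1].

    Simplified model: dynamic power ~ f^3, runtime ~ 1/f, so energy ~ f^2.
    Real hardware also has static power, so actual savings are smaller.
    """
    power = base_power_w * freq_scale ** 3   # dynamic power ~ f^3
    runtime = base_runtime_s / freq_scale    # runtime ~ 1/f
    return power * runtime                   # energy ~ f^2

# Hypothetical job: 300 W for 1000 s at full clock.
full = energy_joules(300.0, 1000.0, 1.0)
scaled = energy_joules(300.0, 1000.0, 0.8)  # run 25% longer at 80% clock
print(f"full clock: {full / 1000:.0f} kJ, 80% clock: {scaled / 1000:.0f} kJ")
```

Under this toy model, accepting a 25% longer runtime cuts energy by roughly a third, which is exactly the kind of trade-off energy-aware HPC research has to quantify on real workloads.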
To learn more, read the publication:
https://ercim-news.ercim.eu/en131/special/making-scientific-research-on…
Artificial Intelligence (AI) has revolutionized the research landscape, offering new methodologies and tools that have the potential to greatly enhance scientific discoveries. However, the adoption of AI in research also brings its own set of challenges, requiring new skills and a deep understanding of responsible AI practices. We need your insights to help us navigate this exciting yet complex field. Help us by filling in our 10-minute survey on responsible AI in research!
Do you have ideas about the future of the campus in secondary vocational, higher professional, and university education (mbo, hbo, and wo)? And would you like to think and talk along about this? Then we have good news! In May and June, the SURF Future Campus project is organising four regional meetings across the Netherlands, in which we will work together on creating diverse future scenarios.
Ever since the release of ChatGPT, people have been amazed and have been using it to help them with all sorts of tasks, such as content creation. However, the model has faced criticism: some are raising concerns about plagiarism, for example. AI-generated content detectors claim to distinguish between text written by a human and text written by an AI. How well do these tools really work? According to our findings, they are no better than random classifiers when tested on AI-generated content.
There are more concerns than just the performance of these tools, however. For one, there is no guarantee of avoiding false positives, and wrongfully accusing someone of plagiarism would be especially harmful. Moreover, this seems likely to turn into a game of cat and mouse, with language models and the tools promising to detect them continually trying to outdo each other. All in all, detection tools do not seem to offer a robust or long-term solution. Perhaps it would be better to include the impact of artificial intelligence in the existing discussion about how best to design exams and assignments to test students.
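What "no better than a random classifier" means can be made concrete with a small evaluation sketch. This is a hypothetical illustration, not our actual benchmark: `random_guesses` stands in for any detector's predictions on a balanced set of human- and AI-written samples, and the labels are invented for the example.

```python
import random

random.seed(0)
true_labels = [0] * 50 + [1] * 50          # 0 = human, 1 = AI-generated

# A random classifier guesses each label with probability 0.5.
random_guesses = [random.randint(0, 1) for _ in true_labels]

def accuracy(pred, truth):
    """Fraction of predictions that match the ground-truth labels."""
    return sum(p == t for p, t in zip(pred, truth)) / len(truth)

print(f"random baseline: {accuracy(random_guesses, true_labels):.2f}")
```

On a balanced test set, the random baseline sits around 0.50; a detector whose accuracy does not clearly exceed that provides no evidence at all, which is why basing plagiarism accusations on it is so problematic.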
The future is not a destination –
it’s [about] practicing possible futures.
It’s about rehearsing different strategic options.
There is no shortage of talk about the future at SURF. The Copenhagen Institute for Futures Studies was welcomed to SURF on February 28th and 29th to introduce applied strategic forecasting to a variety of SURF participants. The scenarios, skills, and ideas built during the training create new ways of thinking about the future as a tool and offer models for using that tool. The trainers, Simon Fuglsang Østergaard and Sofie Hvitved, were there to guide the process.
What is self-regulated learning and why do we care?
In this brief experiment, we are teaming up with a group of researchers from Radboud University led by Joep van der Graaf and Inge Molenaar (https://www.ru.nl/bsi/research/group-pages/adaptive-learning-lab-all/design-development-innovative-learning/flora-project/) to investigate the potential of machine learning in the field of self-regulated learning (SRL). SRL is a process where students employ strategies to learn effectively, such as task planning, performance monitoring, and reflection on outcomes. Self-regulation abilities include goal setting, self-monitoring, self-instruction, and self-reinforcement. With this project, SURF is supporting research about “learning to learn”. By helping people develop self-regulated learning skills, we can help them become more effective and efficient learners, which can lead to better academic and professional outcomes.
Until now, extracting learning processes has mostly been done on the basis of researchers’ theory. By using machine learning, specifically unsupervised methods, we create a more data-driven approach: our model learns to detect patterns in the data, and because it is unsupervised, that data does not need to be labeled.
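The unsupervised idea can be sketched in miniature. In this hypothetical example (the features and data are invented for illustration and are not the project's actual data), each learning session is summarised as a feature vector, here the fraction of time spent planning versus monitoring, and a small k-means routine groups similar sessions without any labels.

```python
import random

random.seed(1)

def kmeans(points, k, iters=20):
    """Tiny k-means: assign points to nearest centre, recompute centres."""
    # Simple deterministic initialisation: first and last point.
    centers = [points[0], points[-1]]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(p, centers[c])))
            clusters[i].append(p)
        centers = [tuple(sum(xs) / len(xs) for xs in zip(*cl)) if cl
                   else centers[i] for i, cl in enumerate(clusters)]
    return centers, clusters

# Two invented behaviour profiles: heavy planners and heavy monitors.
sessions = ([(0.8 + random.uniform(-0.1, 0.1), 0.1) for _ in range(20)] +
            [(0.1, 0.8 + random.uniform(-0.1, 0.1)) for _ in range(20)])
centers, clusters = kmeans(sessions, k=2)
print("cluster centres:", centers)
```

The point of the sketch is that the two groups are recovered purely from structure in the data; with real trace data, such clusters would then be interpreted by the researchers as candidate learning-process patterns.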