Hardware architecture improvements relied on Moore's law and Dennard scaling for decades. Moore's law predicted that the number of transistors on a chip doubles roughly every 18 months; Dennard scaling held that as transistors shrink, their power density stays constant. Together, these allowed manufacturers to raise clock frequencies without significantly increasing overall circuit power consumption. Today, Moore's law is no longer holding as predicted, and Dennard scaling has broken down. In response, (homogeneous) multi-processors were introduced, opening a new era of parallel processing that delivers better performance for a reasonable increase in power consumption. However, homogeneous multi-processors cannot keep up with demanding application requirements such as low power, energy efficiency, and high performance.
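The 18-month doubling mentioned above can be sketched numerically (an illustrative calculation, not from the original text):

```python
# Sketch of Moore's-law growth, assuming a fixed doubling period of
# 18 months (1.5 years). The starting count below is illustrative.
def transistor_count(n0: float, years: float, doubling_years: float = 1.5) -> float:
    """Projected transistor count after `years`, starting from n0."""
    return n0 * 2 ** (years / doubling_years)

# Example: starting from 1 million transistors, 6 years means 4 doublings,
# i.e. 1e6 * 2**4 = 16 million.
```

Six years of such growth already yields a 16x increase, which is why the breakdown of this trend forced a shift toward parallelism rather than ever-higher clock speeds.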
These are the articles you read most on the Vraagbaak Online Onderwijs over the past six months. They cover, among other things, blended learning, knowledge clips, online learning and, of course, ChatGPT.
In recent years, responsible artificial intelligence (AI) has been receiving more attention. The potential of AI is undeniable, and there has been growing interest in applying it across many industries. However, the application of AI comes with risks, which is why ethical frameworks and guidelines have emerged that aim to regulate and limit them. AlgorithmWatch (2020) created an inventory of AI ethics guidelines containing over 160 entries. Most were published after 2018, showing the increased attention to making AI more responsible.
However, as these AI ethics guidelines evolve, they are proving difficult to apply: they tend to be too abstract and generic to be readily applicable in practice. More practical guidelines are needed to help navigate the complexities of AI implementation. My thesis here at SURF investigates what is needed to create a standards framework that supports the responsible use of AI and offers more practical guidance. This blog post explores the applicability problem of these guidelines and how my thesis aims to bridge the gap between abstract guidelines and practical application in higher education.
In the ever-evolving realm of scientific research, the advent of Artificial Intelligence (AI) has been like the discovery of a new frontier. Yet, it isn't without its hurdles. This was a central topic during our panel session at SURF Research Day 2023, where we delved into the intricacies of employing machine learning as a research methodology.
Energy is an emerging topic in the scientific computing ecosystem and is becoming a design point for future research. Science relies increasingly on digital research computing as a tool for analysis and experimentation. The exponential increase in demand for computing means that classically designed ICT infrastructure will soon become unsustainable in terms of its energy footprint. We need to experiment with energy-efficient methods, tools, algorithms and hardware technologies. In the Netherlands, we are working towards zero energy waste for high-performance computing (HPC) applications on the national supercomputer "Snellius". This involves discussing challenges, proposing new research directions, finding opportunities to engage the user community, and taking steps towards the responsible use of software in research.
Traditionally, supercomputing focuses on improving latency or throughput, which are of massive importance for applications such as drug discovery or climate simulations. For many decades, we have developed infrastructure, algorithms, and software tools to obtain these improvements. Given the rapid increase in energy usage for ICT services, further emphasised by the imminent energy crisis, it is a priority to understand and optimise the energy consumption of research computing applications.
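As a concrete starting point for understanding an application's energy consumption, one option is to read the CPU's own energy counters. The sketch below assumes a Linux host that exposes Intel RAPL counters under `/sys/class/powercap`; the exact paths and their availability vary by CPU and kernel, so treat this as illustrative rather than a portable tool:

```python
# Minimal sketch: estimate the energy used by a workload from RAPL
# counters. Assumes a Linux host exposing /sys/class/powercap/intel-rapl:0
# (availability varies by CPU and kernel; permissions may be required).
from pathlib import Path

RAPL_DOMAIN = Path("/sys/class/powercap/intel-rapl:0")

def read_energy_uj(domain: Path = RAPL_DOMAIN) -> int:
    """Read the cumulative package energy counter in microjoules."""
    return int((domain / "energy_uj").read_text())

def energy_delta_uj(start: int, end: int, max_range: int) -> int:
    """Energy used between two readings, allowing for one counter wraparound.

    `max_range` is the counter's maximum value, as reported by the
    max_energy_range_uj file in the same sysfs directory.
    """
    return end - start if end >= start else end + max_range - start
```

In use, one would read the counter before and after running the workload and take the delta; averaging over several runs helps smooth out background activity on the node.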
To learn more, read the publication:
https://ercim-news.ercim.eu/en131/special/making-scientific-research-on…
Artificial Intelligence (AI) has revolutionized the research landscape, offering new methodologies and tools that have the potential to greatly enhance scientific discoveries. However, the adoption of AI in research also brings its own set of challenges, requiring new skills and a deep understanding of responsible AI practices. We need your insights to help us navigate this exciting yet complex field. Help us by filling in our 10-minute survey on responsible AI in research!
Do you have ideas about the future of the campus in Dutch secondary vocational and higher education (mbo, hbo and wo)? And would you like to think and talk about this with us? Then we have good news! In May and June, the SURF Future Campus project is organising four regional meetings across the Netherlands, in which we will work together on creating diverse future scenarios.
Ever since the release of ChatGPT, people have been amazed by it and have been using it to help them with all sorts of tasks, such as content creation. However, the model has also faced criticism: some are raising concerns about plagiarism, for example. AI-generated content detectors claim to distinguish between text written by a human and text written by an AI. How well do these tools really work? According to our findings, they are no better than random classifiers when tested on AI-generated content.
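The random-classifier comparison can be made concrete with a small sketch (the data and names below are illustrative, not our actual evaluation setup): a coin-flip baseline labels each text AI or human at random, and a detector is only informative if it clearly beats that baseline's accuracy on held-out texts.

```python
# Illustrative sketch: comparing a detector against a coin-flip baseline.
# The labelled data here is synthetic, purely for demonstration.
import random

def accuracy(predictions, labels):
    """Fraction of predictions that match the true labels."""
    return sum(p == y for p, y in zip(predictions, labels)) / len(labels)

# True = AI-generated, False = human-written (synthetic labels).
labels = [True, False] * 50

random.seed(0)
coin_flip = [random.random() < 0.5 for _ in labels]

# A detector whose accuracy is statistically indistinguishable from the
# coin flip's (~0.5) provides no real evidence either way.
```

On a balanced test set, any detector near 50% accuracy is effectively guessing, which is the behaviour we observed on AI-generated content.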
There are more concerns than just the performance of these tools, however. For one, there is no guarantee of avoiding false positives, and wrongfully accusing someone of plagiarism would be especially harmful. Moreover, this is likely to become a game of cat and mouse, with language models and the tools promising to detect them continually trying to outdo each other. All in all, detection tools do not seem to offer a robust or long-term solution. Perhaps it would be better to include the impact of artificial intelligence in the existing discussion about the best way to design exams and assignments to test students.
The future is not a destination –
it’s [about] practicing possible futures.
It’s about rehearsing different strategic options.
There is no shortage of talk about the future at SURF. The Copenhagen Institute for Futures Studies was welcomed to SURF on February 28th and 29th to introduce applied strategic forecasting to a variety of SURF participants. The scenarios, skills and ideas built during the training create new ways of thinking about the future as a tool and offer models for using that tool. The trainers, Simon Fuglsang Østergaard and Sofie Hvitved, were there to guide the process.