A recent study from Stanford University proposes an index for measuring transparency in Large Language Models (LLMs) and other foundation models. The study's findings suggest that transparency across these models is currently low. It highlights concerns about the poorly understood capabilities of LLMs and aims to build public awareness of their potential impact. For more information, read the full blog on Medium.
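
To give a rough intuition for what a transparency index involves, here is a minimal sketch. It assumes the index aggregates binary indicators (disclosed / not disclosed) into a single per-model score; the indicator names and values below are hypothetical and do not reflect the study's actual methodology or data, which are described in the full report.

```python
# Minimal, hypothetical sketch of a transparency index.
# Assumption: each model is scored on binary indicators and the index is
# the fraction of indicators satisfied. This is illustrative only.

from typing import Dict


def transparency_score(indicators: Dict[str, bool]) -> float:
    """Return the fraction of transparency indicators a model satisfies."""
    if not indicators:
        return 0.0
    return sum(indicators.values()) / len(indicators)


# Hypothetical example: a model that discloses training data and compute,
# but not labor practices or downstream usage statistics.
example_model = {
    "training_data_disclosed": True,
    "compute_disclosed": True,
    "labor_practices_disclosed": False,
    "usage_statistics_disclosed": False,
}

print(f"Transparency score: {transparency_score(example_model):.2f}")  # 0.50
```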

Source: How transparent are large language models? – Towards AI
