Official website: https://www.humane-intelligence.org
Humane Intelligence is a tech nonprofit building a community of practice around algorithmic evaluations.
The organization, led by Dr. Rumman Chowdhury, is building a programming platform for model evaluators and for individuals seeking to learn more about model evaluations. By creating this community and practice space, we aim to professionalize the practice of algorithmic auditing and evaluation. Humane-intelligence.org is a platform where organizations and individuals can align, build community, share best practices, and find a one-stop shop for creating technical evaluations that help drive benchmarks, standards, and more. We are actively engaged in developing hands-on, measurable methods for real-time assessment of the societal impact of AI models.
Humane Intelligence started as a small organization hosting algorithmic bias bounties. These programs, generally held online, were a great starting point for raising interest in and awareness of algorithmic bias bounties. In 2023, we expanded to hosting in-person red teaming events, including the largest-ever generative AI red teaming exercise at DEF CON 31, a scientific mis- and disinformation red teaming exercise with the Royal Society, and an architecture red teaming event at Autodesk University.