Decoding AI Governance Feedback Form

Thank you for providing feedback on “Decoding AI Governance: A toolkit for navigating evolving norms, standards, and rules” from Partnership on AI. 
This document presents preliminary versions of the tools (AI Governance Stack, AI Governance Map, and Anchors and Hooks Framework) that we intend to refine collaboratively with our Partners and through feedback from our community.

Please share your thoughts on the following:
To what extent is the AI Governance Stack useful, and how can it be improved?
  1. What layers should be added or collapsed? 
  2. Should the altitude of the layers be adjusted? 
  3. Are the current “functions” of each layer accurate? 
  4. More specifically, how should we account for the fact that a layer may not, in practice, meet its ideal goal?
To what extent is the AI Governance Map useful, and how can it be improved?
  1. Are the principles and characteristics for the X-axis appropriate? 
  2. Which other frameworks should inform it? 
  3. To what extent is it critical to set out common definitions for the X-axis concepts? 
  4. Should any of the instruments on the map be repositioned or recolored?
To what extent is the Anchors and Hooks Framework useful, and how can it be improved?
  1. In what contexts are common Anchors important? 
  2. In what contexts is flexibility helpful for maintaining coherence as varying governance models evolve? 
  3. Which instruments currently act, or should act, as Anchors in the AI governance space?
Please add any additional thoughts here.