In today's open session we analyzed the application of impact-evaluator (block-reward-style) systems to two domains: academic publishing and the environment. We derived five useful features of their design:

1. Every impact evaluator function requires a credible conversion into fungibility. Hash power for BTC and storage for FIL are clear mathematical functions that allow issuance against a formula. But people only buy into the issuance if they accept its neutrality. Carbon credits, for example, are fungible, but many coal polluters receive credits just for using slightly better technology, so the metric is not entirely credible.
2. If properly designed, impact evaluator systems become knobs by which we can align long-term actors around an ideal outcome. The underlying metrics should be hard to obtain but easy to verify, like BTC hash power or storage capacity.
3. Ideally we first solve a problem locally, like "is this paper good enough to be accepted at the conference?", and then feed those answers into more global problems like "is this conference high impact?" or "how good is this researcher, as measured by publications in good conferences?"
4. We want impact evaluators to be self-upgrading systems; otherwise they can ossify into bastions of power. A good example is the plurality weighting implemented in Community Notes or cluster-matching QF: if two people who normally disagree now agree, that agreement gets a higher weight, but if they agree again next time it gets a lower weight, since they voted together last time.
5. Finally, impact evaluators are hard mathematical functions that release emissions, and these must be squared against softer, more irrational forces like the market price of the currency being emitted.
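The plurality weighting in point 4 can be sketched as a toy model. This is my own simplified illustration, not the actual Community Notes or cluster-matching QF algorithm: the weight of a new agreement between two voters falls as their historical agreement rate rises.

```python
def agreement_weight(history):
    """Toy weight for a new agreement between two voters.

    history: list of (vote_a, vote_b) pairs from the same two voters.
    Agreement between voters who usually disagree is surprising and
    counts more; repeated agreement decays toward zero weight, since
    correlated voters carry less new information.
    """
    if not history:
        return 1.0  # no track record: full weight by default
    agreements = sum(1 for va, vb in history if va == vb)
    agreement_rate = agreements / len(history)
    # More past disagreement -> more surprising agreement -> higher weight.
    return 1.0 - agreement_rate


# Two voters who disagreed twice now agree: the agreement gets full weight.
print(agreement_weight([(1, 0), (0, 1)]))  # 1.0
# After one prior agreement, a further agreement is discounted (2/3 here).
print(agreement_weight([(1, 0), (0, 1), (1, 1)]))
```

The decay rule (a linear discount by agreement rate) is an arbitrary choice for illustration; real systems fit voter correlations from much richer data.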
Devansh Mehta · 29.7.2025
What a great first presentation at the research retreat by one of the participants, on control theory. He ran a quant firm full of mathematicians, so he needed to determine the bonus structure exactly, based on the profit each trader made. It was highly technical and much of it went over my head, but some key points I did get:

1. Convert global problems (how much did this person contribute to the company?) into local ones (who was responsible for this $100 trade, and how much of it?).
2. Separate estimation (figuring out the weights) from control (determining payouts based on the obtained parameters).
3. For control questions, change from a graph structure to a matrix, which makes the whole distribution problem more tractable.

Much of what we discussed was highly relevant to deep funding. My two key takeaways:

- If parts of the matrix are unfilled, can we use distilled human judgment to still estimate their values?
- If deep funding is less a tree structure and more a directed acyclic graph, can recommendation algorithms be applied to obtaining weights between repos?
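The matrix framing in point 3 and the first takeaway can be sketched as a toy model. All the numbers and the gap-filling rule below are my own illustrative assumptions, not the retreat's actual method: pairwise credit weights go into a matrix with NaN for unscored pairs, missing entries are filled from observed row/column averages (a crude stand-in for distilled human judgment), and normalized columns become payout fractions.

```python
import numpy as np

# weights[i, j] = estimated share of repo j's credit owed to dependency i.
# NaN marks pairs that no evaluator has scored yet.
weights = np.array([
    [0.5, np.nan, 0.2],
    [0.3, 0.6,    np.nan],
    [0.2, 0.4,    0.5],
])

# Estimation step: fill each gap with the average of its row mean and
# column mean -- a placeholder for human judgment on the missing pairs.
row_mean = np.nanmean(weights, axis=1, keepdims=True)
col_mean = np.nanmean(weights, axis=0, keepdims=True)
filled = np.where(np.isnan(weights), (row_mean + col_mean) / 2, weights)

# Control step: normalize each column so every repo's credit shares
# sum to 1, turning the estimated weights into concrete payout fractions.
payouts = filled / filled.sum(axis=0, keepdims=True)
print(payouts.sum(axis=0))  # each column sums to 1.0
```

Separating the two steps mirrors point 2: the gap-filling rule can be swapped out (e.g., for a recommendation-style matrix completion, per the second takeaway) without touching the payout logic.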