abitmore commented on this pull request.
> @@ -250,7 +253,7 @@ block_production_condition::block_production_condition_enum witness_plugin::mayb
fc::time_point_sec scheduled_time = db.get_slot_time( slot );
- graphene::chain::public_key_type scheduled_key = scheduled_witness( db ).signing_key;
+ graphene::chain::public_key_type scheduled_key = *db.find_witness_key_from_cache( scheduled_witness ); // should be valid
I agree that ideally it's better to put the logic in the witness plugin, because block generation is the witness plugin's business.
> Blocks are only popped when switching forks, which means there will be block pushes immediately after the pop, which means the cache will automatically be updated in that case.
One scenario to consider: if we put the cache in the witness plugin and code the plugin to track applied operations only, the cache may become inconsistent after a popped block that contains a `witness_update_operation`. If the replacement block does not contain the same `witness_update_operation`, the cache would remain inconsistent. To solve this, we would need additional code to detect that a chain reorganization has occurred and refresh the cache accordingly, or simply refresh the whole cache on every block. However, refreshing the cache means fetching data from the main chain state, and since signalling is a multi-threading thing, the main state could have changed by the time we fetch, which means the cache is not 100% reliable.
By putting the cache in the database class we can guarantee its consistency. There may still be edge cases, for example when the witness plugin tries to generate a block while the database is in the middle of pushing one.
Another thing is that we don't have unit tests for the witness plugin, so adding more code there means more work and more risk.