Investigate server option unique_checks
I just read this in the documentation for that variable: "If unique_checks is disabled when the primary key is not unique, secondary indexes may become corrupted. In this case, the indexes should be dropped and rebuilt."
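To make the hazard concrete, here is a minimal sketch (table, column, and index names are all hypothetical) of how a session could end up in the state the documentation warns about when unique_checks is disabled and the primary key data is not actually unique:

```sql
-- Hypothetical TokuDB table; names are illustrative only.
CREATE TABLE t (
  id INT NOT NULL PRIMARY KEY,
  v  INT,
  KEY k_v (v)
) ENGINE=TokuDB;

-- Disabling unique_checks tells the engine it may skip
-- uniqueness validation (intended as a bulk-load optimization).
SET SESSION unique_checks = 0;

INSERT INTO t VALUES (1, 10);
INSERT INTO t VALUES (1, 20);  -- duplicate PK may be accepted silently

SET SESSION unique_checks = 1;
-- Per the quoted docs, the primary index and the secondary index
-- k_v can now disagree about which rows exist.
```

Nothing in the server warns the user at this point; the corruption only surfaces later, when reads through the primary and secondary paths return different answers.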
I do not like this; a user should never be given an option to corrupt their data, which is why I deprecated the dangerous mode of tokudb_pk_insert_mode. I also now see the difference between this feature and tokudb_pk_insert_mode: they are different cases that provide a similar optimization. If the code could validate at run time that the optimization is safe, the option would not be needed and the optimization could be applied automatically.
OK, so there is code in TokuDB that uses this system variable, and it does not seem to have changed for a very long time. It intersects in a few places with RFR, so it is possible that the implementation of RFR broke this behavior. Now that I am aware of this option and optimization, I am concerned that it exists. It allows users to get themselves into a state where their primary and secondary indexes are out of sync, which is exactly one of the 'data corruption/consistency' issues that we occasionally hear about from the field and have no answer for.
IMHO, unless a predicate can be contrived within the code that guarantees consistency 100% (e.g., testing whether the PK is actually unique before deciding to skip the lookup), this TokuDB implementation should be removed or augmented somehow to inform users that they might be shooting themselves in the foot. As for whether or not it currently works, I cannot answer that.
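For a user who has already hit this state, the remediation prescribed by the quoted documentation amounts to rebuilding the affected secondary indexes. A sketch, assuming a table t with a secondary index k_v on column v (hypothetical names):

```sql
-- Detect a primary/secondary mismatch.
CHECK TABLE t;

-- Drop and rebuild the suspect secondary index, as the docs direct.
ALTER TABLE t DROP KEY k_v;
ALTER TABLE t ADD KEY k_v (v);
```

Note that this only repairs the secondary index from whatever the primary index now contains; if the duplicate primary-key rows themselves are wrong, that damage is not recoverable this way.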
This is similar functionality to NOAR.