User: "is there a reason why redis server logs are missing the year in the 'date time'?"
Me: "I guess I did not imagine it would be stable enough to run for several years."
Related to #5157. The PR author correctly identified that the check was duplicated, but removing the second instance reintroduces a bug that was fixed in the past (hence the duplication). Instead, we can remove the first instance of the check without issues; the sketch below illustrates why.
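To make the reasoning concrete, here is an abstract, hypothetical C sketch (the scenario and names are invented for illustration and do not come from the actual code around #5157): when state can change between two duplicated checks, only the later check still guards the operation, so it is the earlier one that is safe to drop.

```c
#include <stdio.h>

static int clients_paused = 0;      /* hypothetical shared state */

static void doSomeWork(void) {
    clients_paused = 1;             /* state may change between the checks */
}

static void handleRequest(void) {
    if (clients_paused) return;     /* first check: redundant, safe to drop */
    doSomeWork();
    if (clients_paused) return;     /* second check: still guards the work below */
    printf("request handled\n");
}

int main(void) {
    handleRequest();                /* prints nothing: the second check fires */
    return 0;
}
```

Dropping the second check would handle the request even though the state changed in between; dropping the first one changes nothing observable.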
Implementation notes: as INFO is "already broken", I didn't want to break it further. Instead of computing the size of the server.lua_scripts dict on every call, I keep a running sum of the script bodies' lengths plus the dict overheads.
This implementation is naive in that it **does not** account for dict rehashing, but that inaccuracy pays off in speed ;)
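A minimal C sketch of the running-sum idea, with illustrative names (`CACHE_ENTRY_OVERHEAD` and the helpers are assumptions, not the actual identifiers): the counter is adjusted whenever a script is cached or the cache is flushed, so INFO can report it in O(1) instead of walking the dict.

```c
#include <stdio.h>
#include <string.h>

/* Rough per-script overhead: dict entry plus sha1 key string.
 * Deliberately ignores dict rehashing, trading accuracy for speed. */
#define CACHE_ENTRY_OVERHEAD 64

static long long lua_scripts_mem = 0;   /* running sum, reported by INFO */

static void scriptCacheAdd(const char *body) {
    lua_scripts_mem += (long long)strlen(body) + CACHE_ENTRY_OVERHEAD;
}

static void scriptCacheFlush(void) {
    lua_scripts_mem = 0;                /* SCRIPT FLUSH empties the cache */
}

int main(void) {
    scriptCacheAdd("return('Hello, Script Cache :)')");
    printf("used_memory_scripts:%lld\n", lua_scripts_mem);
    scriptCacheFlush();
    printf("used_memory_scripts:%lld\n", lua_scripts_mem);
    return 0;
}
```

Under this scheme, the nonzero baseline in the demo below (96 bytes with zero cached scripts) would be the fixed overhead of the empty cache itself.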
Demo time:
```bash
$ redis-cli info memory | grep "script"
used_memory_scripts:96
used_memory_scripts_human:96B
number_of_cached_scripts:0
$ redis-cli eval "" 0 ; redis-cli info memory | grep "script"
(nil)
used_memory_scripts:120
used_memory_scripts_human:120B
number_of_cached_scripts:1
$ redis-cli script flush ; redis-cli info memory | grep "script"
OK
used_memory_scripts:96
used_memory_scripts_human:96B
number_of_cached_scripts:0
$ redis-cli eval "return('Hello, Script Cache :)')" 0 ; redis-cli info memory | grep "script"
"Hello, Script Cache :)"
used_memory_scripts:152
used_memory_scripts_human:152B
number_of_cached_scripts:1
$ redis-cli eval "return redis.sha1hex(\"return('Hello, Script Cache :)')\")" 0 ; redis-cli info memory | grep "script"
"1be72729d43da5114929c1260a749073732dc822"
used_memory_scripts:232
used_memory_scripts_human:232B
number_of_cached_scripts:2
$ redis-cli evalsha 1be72729d43da5114929c1260a749073732dc822 0
"Hello, Script Cache :)"
```
On slower machines, the active defrag test tended to fail: although the fragmentation ratio was below the threshold, the defragger was still in the middle of a scan cycle.

This commit changes two things:
- The defragger now uses the current fragmentation state, rather than the cached one that server cron updates every 100ms. This also fixes a bug of starting one excess scan cycle (see the sketch after this list).
- The test lets the defragger use more CPU cycles, in the hope that the defrag will be faster, but also gives it more time before we give up.
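A hedged C sketch of the first change, using made-up names rather than Redis's actual identifiers: the decision to start a scan cycle is based on the allocator's current fragmentation ratio instead of a value sampled by cron up to 100ms earlier, which is how a stale value could trigger one excess cycle.

```c
#include <stdio.h>

static float cached_frag = 1.6f;       /* stale: last sample taken by cron */
static float defrag_threshold = 1.3f;  /* e.g. the lower defrag threshold */

/* Stand-in for querying the allocator directly for live stats. */
static float getCurrentFragmentation(void) {
    return 1.1f;  /* defrag already brought us below the threshold */
}

static int shouldStartDefragCycle(void) {
    /* Before: `cached_frag >= defrag_threshold` -> one excess cycle. */
    return getCurrentFragmentation() >= defrag_threshold;
}

int main(void) {
    printf("stale decision:   %d\n", cached_frag >= defrag_threshold);  /* 1 */
    printf("current decision: %d\n", shouldStartDefragCycle());         /* 0 */
    return 0;
}
```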