If the current job is the latest one in the loader pool and the job is deleted,
the job falls into an infinite load-fail-delete-reload cycle.
So we need to prevent this infinite loop when XAI data loading fails.
1. delete unused code: the catch for unknown errors in explain_parser.py is unused
2. fix a bug where the latest file size is unused
3. rename some variables and fix some spelling errors
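A minimal sketch of one way to break the cycle described above (class and method names here are hypothetical illustrations, not the actual loader-pool API): record a job as failed when its load raises, and skip reloading failed jobs on later polls.

```python
class LoaderPool:
    """Hypothetical loader pool that skips jobs whose load already failed,
    breaking the load-fail-delete-reload cycle."""

    def __init__(self):
        self._loaders = {}   # job_id -> loaded data
        self._failed = set() # job_ids whose load raised an error

    def load_job(self, job_id, load_fn):
        if job_id in self._failed:
            return None  # do not retry a job that already failed to load
        try:
            self._loaders[job_id] = load_fn(job_id)
        except Exception:
            # remember the failure so the next poll does not reload it
            self._failed.add(job_id)
            self._loaders.pop(job_id, None)
            return None
        return self._loaders[job_id]
```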
bugfix and add pillow to requirements.txt
modify summary format
bugfix
use sample_id in summary
fix CI problem
url encode '/' as well
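For reference, the standard-library way to percent-encode '/' as well: by default `urllib.parse.quote` treats '/' as safe and leaves it unescaped, so `safe=''` must be passed explicitly.

```python
from urllib.parse import quote

# By default quote() treats '/' as safe and leaves it alone.
print(quote("runs/exp-1"))           # runs/exp-1
# Passing safe='' encodes '/' as well, e.g. for an ID used as a path segment.
print(quote("runs/exp-1", safe=""))  # runs%2Fexp-1
```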
fix ut
fix ut
fix ut
fix uncertainty enable checking
fix review comment
enhance exception raising
enhance comment
fix UT
del unused arguments
fix UT
change explainer_api UT to pytest style
fix UT url
change explainer encapsulator UT to pytest style
remove unnecessary deepcopy
remove import copy
fix some coding style issues
json return null if ExplainManager returns None
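This works because Python's `json` module serializes `None` as JSON `null`, so the endpoint can pass the manager's return value straight through (the manager below is a stub standing in for ExplainManager):

```python
import json

def get_job(manager, job_id):
    """Hypothetical handler: if the manager returns None, the body is JSON null."""
    result = manager.get(job_id)  # may be None when the job is missing
    return json.dumps(result)

class StubManager:
    def get(self, job_id):
        return None  # simulate ExplainManager returning None

print(get_job(StubManager(), "missing"))  # null
```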
use fromtimestamp
use fromtimestamp
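`datetime.fromtimestamp` converts epoch seconds (e.g. a summary file's mtime) into a `datetime` directly; a small illustration with a made-up timestamp:

```python
from datetime import datetime, timezone

ts = 1600000000.0  # epoch seconds, e.g. a summary file's update time
# Local-time conversion:
local_dt = datetime.fromtimestamp(ts)
# Timezone-aware UTC conversion avoids local-timezone surprises:
utc_dt = datetime.fromtimestamp(ts, tz=timezone.utc)
print(utc_dt.isoformat())  # 2020-09-13T12:26:40+00:00
```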
fix wrong name
fix wrong JSON key
use offset as page offset
enhance comments
rm mock_explainer_api.py
update copyright year 2020
remove unused import
modification for parser
modification for parser with containers
fix bugs for parser
fix missing values in the parser
fix typos for parser
adapt to proto modification
rearrange image data from events
fix small typo
add comments and some error message
delete example
modify the parser's init settings
restructure parts of the parser
fix pylint warnings
modify parser to parse one event at a time
modify for review
add some comments
modify proto file
modify according to open-source review comments