
run.py 20 kB

[to #9061073] feat: merge tts to master
Link: https://code.alibaba-inc.com/Ali-MaaS/MaaS-lib/codereview/9061073

* [to #41669377] docs and tools refinement and release
  1. add build_doc linter script
  2. add sphinx-docs support
  3. add development doc and api doc
  4. change version to 0.1.0 for the first internal release version
  Link: https://code.aone.alibaba-inc.com/Ali-MaaS/MaaS-lib/codereview/8775307
* [to #41669377] add pipeline tutorial and fix bugs
  1. add pipeline tutorial
  2. fix bugs when using pipeline with certain model and preprocessor
  Link: https://code.alibaba-inc.com/Ali-MaaS/MaaS-lib/codereview/8814301
* refine doc
* merge remote release/0.1 and fix conflict
* Merge branch 'release/0.1' into 'nls/tts' (see merge request !1700968)
* [Add] add tts preprocessor without requirements; finish requirements build later
* [Add] add requirements and frd submodule
* [Fix] remove models submodule
* [Add] add am module
* [Update] update am and vocoder
* [Update] remove submodule
* [Update] add models
* [Fix] fix init error
* [Fix] fix bugs with tts pipeline
* merge master
* [Update] merge from master
* remove frd submodule and use wheel from oss
* change scripts
* [Fix] fix bugs in am and vocoder
* [Merge] merge from master
* Merge branch 'master' into nls/tts
* [Fix] fix bugs
* [Fix] fix pep8
* [Update] remove hparams and import configuration from kwargs
* upgrade tf113 to tf115
* Merge branch 'nls/tts' of gitlab.alibaba-inc.com:Ali-MaaS/MaaS-lib into nls/tts
* add multiple versions of ttsfrd
* [Fix] fix cr comments
* [Fix] fix cr comments 0617
* [Fix] remove commented-out code
* [Fix] fix crash for incompatible tf and pytorch versions; frd now uses a zip file resource
* [Add] add cuda support

3 years ago
#!/usr/bin/env python
# Copyright (c) Alibaba, Inc. and its affiliates.
import argparse
import datetime
import math
import multiprocessing
import os
import subprocess
import sys
import tempfile
import time
import unittest
from fnmatch import fnmatch
from multiprocessing.managers import BaseManager
from pathlib import Path
from unittest import TestResult, TextTestResult

import pandas
# NOTICE: Tensorflow 1.15 seems not so compatible with pytorch.
# A segmentation fault may be raised by the pytorch cpp library
# if 'import tensorflow' comes before 'import torch'.
# Putting an 'import torch' here bypasses this incompatibility.
import torch
import yaml

from modelscope.utils.logger import get_logger
from modelscope.utils.model_tag import ModelTag, commit_model_ut_result
from modelscope.utils.test_utils import (get_case_model_info, set_test_level,
                                         test_level)

logger = get_logger()


def test_cases_result_to_df(result_list):
    table_header = [
        'Name', 'Result', 'Info', 'Start time', 'Stop time',
        'Time cost(seconds)'
    ]
    df = pandas.DataFrame(
        result_list, columns=table_header).sort_values(
            by=['Start time'], ascending=True)
    return df


def statistics_test_result(df):
    total_cases = df.shape[0]
    # yapf: disable
    success_cases = df.loc[df['Result'] == 'Success'].shape[0]
    error_cases = df.loc[df['Result'] == 'Error'].shape[0]
    failures_cases = df.loc[df['Result'] == 'Failures'].shape[0]
    expected_failure_cases = df.loc[df['Result'] == 'ExpectedFailures'].shape[0]
    unexpected_success_cases = df.loc[df['Result'] == 'UnexpectedSuccesses'].shape[0]
    skipped_cases = df.loc[df['Result'] == 'Skipped'].shape[0]
    # yapf: enable
    if failures_cases > 0 or \
       error_cases > 0 or \
       unexpected_success_cases > 0:
        final_result = 'FAILED'
    else:
        final_result = 'SUCCESS'
    result_msg = '%s (Runs=%s,success=%s,failures=%s,errors=%s,\
skipped=%s,expected failures=%s,unexpected successes=%s)' % (
        final_result, total_cases, success_cases, failures_cases, error_cases,
        skipped_cases, expected_failure_cases, unexpected_success_cases)

    model_cases = get_case_model_info()
    for model_name, case_info in model_cases.items():
        cases = df.loc[df['Name'].str.contains('|'.join(list(case_info)))]
        results = cases['Result']
        result = None
        if any(results == 'Error') or any(results == 'Failures') or any(
                results == 'UnexpectedSuccesses'):
            result = ModelTag.MODEL_FAIL
        elif any(results == 'Success'):
            result = ModelTag.MODEL_PASS
        elif all(results == 'Skipped'):
            result = ModelTag.MODEL_SKIP
        else:
            print(f'invalid results for {model_name}\n{results}')
        if result is not None:
            commit_model_ut_result(model_name, result)

    print('Testing result summary.')
    print(result_msg)
    if final_result == 'FAILED':
        sys.exit(1)


def gather_test_suites_in_files(test_dir, case_file_list, list_tests):
    test_suite = unittest.TestSuite()
    for case in case_file_list:
        test_case = unittest.defaultTestLoader.discover(
            start_dir=test_dir, pattern=case)
        test_suite.addTest(test_case)
        if hasattr(test_case, '__iter__'):
            for subcase in test_case:
                if list_tests:
                    print(subcase)
        else:
            if list_tests:
                print(test_case)
    return test_suite


def gather_test_suites_files(test_dir, pattern):
    case_file_list = []
    for dirpath, dirnames, filenames in os.walk(test_dir):
        for file in filenames:
            if fnmatch(file, pattern):
                case_file_list.append(file)
    return case_file_list


def collect_test_results(case_results):
    result_list = []  # each item is Case, Result, Start time, Stop time, Time cost
    for case_result in case_results.successes:
        result_list.append(
            (case_result.test_full_name, 'Success', '', case_result.start_time,
             case_result.stop_time, case_result.time_cost))
    for case_result in case_results.errors:
        result_list.append(
            (case_result[0].test_full_name, 'Error', case_result[1],
             case_result[0].start_time, case_result[0].stop_time,
             case_result[0].time_cost))
    for case_result in case_results.skipped:
        result_list.append(
            (case_result[0].test_full_name, 'Skipped', case_result[1],
             case_result[0].start_time, case_result[0].stop_time,
             case_result[0].time_cost))
    for case_result in case_results.expectedFailures:
        result_list.append(
            (case_result[0].test_full_name, 'ExpectedFailures', case_result[1],
             case_result[0].start_time, case_result[0].stop_time,
             case_result[0].time_cost))
    for case_result in case_results.failures:
        result_list.append(
            (case_result[0].test_full_name, 'Failures', case_result[1],
             case_result[0].start_time, case_result[0].stop_time,
             case_result[0].time_cost))
    for case_result in case_results.unexpectedSuccesses:
        result_list.append((case_result.test_full_name, 'UnexpectedSuccesses',
                            '', case_result.start_time, case_result.stop_time,
                            case_result.time_cost))
    return result_list


def run_command_with_popen(cmd):
    with subprocess.Popen(
            cmd,
            stdout=subprocess.PIPE,
            stderr=subprocess.STDOUT,
            bufsize=1,
            encoding='utf8') as sub_process:
        for line in iter(sub_process.stdout.readline, ''):
            sys.stdout.write(line)


def async_run_command_with_popen(cmd, device_id):
    logger.info('Worker id: %s args: %s' % (device_id, cmd))
    env = os.environ.copy()
    env['CUDA_VISIBLE_DEVICES'] = '%s' % device_id
    sub_process = subprocess.Popen(
        cmd,
        stdout=subprocess.PIPE,
        stderr=subprocess.STDOUT,
        bufsize=1,
        universal_newlines=True,
        env=env,
        encoding='utf8')
    return sub_process


def save_test_result(df, args):
    if args.result_dir is not None:
        file_name = str(int(datetime.datetime.now().timestamp() * 1000))
        os.umask(0)
        Path(args.result_dir).mkdir(mode=0o777, parents=True, exist_ok=True)
        Path(os.path.join(args.result_dir, file_name)).touch(
            mode=0o666, exist_ok=True)
        df.to_pickle(os.path.join(args.result_dir, file_name))


def run_command(cmd):
    logger.info('Running command: %s' % ' '.join(cmd))
    response = subprocess.run(
        cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    try:
        response.check_returncode()
        logger.info(response.stdout.decode('utf8'))
    except subprocess.CalledProcessError as error:
        logger.error(
            'stdout: %s, stderr: %s' %
            (response.stdout.decode('utf8'), error.stderr.decode('utf8')))


def install_packages(pkgs):
    cmd = [sys.executable, '-m', 'pip', 'install']
    for pkg in pkgs:
        cmd.append(pkg)
    run_command(cmd)


def install_requirements(requirements):
    for req in requirements:
        cmd = [
            sys.executable, '-m', 'pip', 'install', '-r',
            'requirements/%s' % req, '-f',
            'https://modelscope.oss-cn-beijing.aliyuncs.com/releases/repo.html'
        ]
        run_command(cmd)


def wait_for_free_worker(workers):
    while True:
        for idx, worker in enumerate(workers):
            if worker is None:
                logger.info('return free worker: %s' % (idx))
                return idx
            if worker.poll() is None:  # running, get output
                for line in iter(worker.stdout.readline, ''):
                    if line != '':
                        sys.stdout.write(line)
                    else:
                        break
            else:  # worker process completed.
                logger.info('Process end: %s' % (idx))
                workers[idx] = None
                return idx
        time.sleep(0.001)


def wait_for_workers(workers):
    while True:
        for idx, worker in enumerate(workers):
            if worker is None:
                continue
            # check whether the worker has completed.
            if worker.poll() is None:
                for line in iter(worker.stdout.readline, ''):
                    if line != '':
                        sys.stdout.write(line)
                    else:
                        break
            else:
                logger.info('Process idx: %s end!' % (idx))
                workers[idx] = None

        is_all_completed = True
        for idx, worker in enumerate(workers):
            if worker is not None:
                is_all_completed = False
                break

        if is_all_completed:
            logger.info('All sub processes are completed!')
            break
        time.sleep(0.001)


def parallel_run_case_in_env(env_name, env, test_suite_env_map, isolated_cases,
                             result_dir, parallel):
    logger.info('Running case in env: %s' % env_name)
    # install requirements and deps  # run_config['envs'][env]
    if 'requirements' in env:
        install_requirements(env['requirements'])
    if 'dependencies' in env:
        install_packages(env['dependencies'])
    # case worker processes
    worker_processes = [None] * parallel
    for test_suite_file in isolated_cases:  # run case in subprocess
        if test_suite_file in test_suite_env_map and test_suite_env_map[
                test_suite_file] == env_name:
            cmd = [
                'python',
                'tests/run.py',
                '--pattern',
                test_suite_file,
                '--result_dir',
                result_dir,
            ]
            worker_idx = wait_for_free_worker(worker_processes)
            worker_process = async_run_command_with_popen(cmd, worker_idx)
            os.set_blocking(worker_process.stdout.fileno(), False)
            worker_processes[worker_idx] = worker_process
        else:
            pass  # case not in run list.

    # run remaining cases in one process.
    remain_suite_files = []
    for k, v in test_suite_env_map.items():
        if k not in isolated_cases and v == env_name:
            remain_suite_files.append(k)
    if len(remain_suite_files) == 0:
        return
    # roughly split cases across parallel workers
    part_count = math.ceil(len(remain_suite_files) / parallel)
    suites_chunks = [
        remain_suite_files[x:x + part_count]
        for x in range(0, len(remain_suite_files), part_count)
    ]
    for suites_chunk in suites_chunks:
        worker_idx = wait_for_free_worker(worker_processes)
        cmd = [
            'python', 'tests/run.py', '--result_dir', result_dir, '--suites'
        ]
        for suite in suites_chunk:
            cmd.append(suite)
        worker_process = async_run_command_with_popen(cmd, worker_idx)
        os.set_blocking(worker_process.stdout.fileno(), False)
        worker_processes[worker_idx] = worker_process

    wait_for_workers(worker_processes)


def run_case_in_env(env_name, env, test_suite_env_map, isolated_cases,
                    result_dir):
    # install requirements and deps  # run_config['envs'][env]
    if 'requirements' in env:
        install_requirements(env['requirements'])
    if 'dependencies' in env:
        install_packages(env['dependencies'])
    for test_suite_file in isolated_cases:  # run case in subprocess
        if test_suite_file in test_suite_env_map and test_suite_env_map[
                test_suite_file] == env_name:
            cmd = [
                'python',
                'tests/run.py',
                '--pattern',
                test_suite_file,
                '--result_dir',
                result_dir,
            ]
            run_command_with_popen(cmd)
        else:
            pass  # case not in run list.

    # run remaining cases in one process.
    remain_suite_files = []
    for k, v in test_suite_env_map.items():
        if k not in isolated_cases and v == env_name:
            remain_suite_files.append(k)
    if len(remain_suite_files) == 0:
        return
    cmd = ['python', 'tests/run.py', '--result_dir', result_dir, '--suites']
    for suite in remain_suite_files:
        cmd.append(suite)
    run_command_with_popen(cmd)


def run_in_subprocess(args):
    # only cases in args.isolated_cases run in a dedicated subprocess,
    # all others run together in one subprocess.
    test_suite_files = gather_test_suites_files(
        os.path.abspath(args.test_dir), args.pattern)

    run_config = None
    isolated_cases = []
    test_suite_env_map = {}
    # put all the cases in the default env.
    for test_suite_file in test_suite_files:
        test_suite_env_map[test_suite_file] = 'default'

    if args.run_config is not None and Path(args.run_config).exists():
        with open(args.run_config, encoding='utf-8') as f:
            run_config = yaml.load(f, Loader=yaml.FullLoader)
            if 'isolated' in run_config:
                isolated_cases = run_config['isolated']
            if 'envs' in run_config:
                for env in run_config['envs']:
                    if env != 'default':
                        for test_suite in run_config['envs'][env]['tests']:
                            if test_suite in test_suite_env_map:
                                test_suite_env_map[test_suite] = env

    if args.subprocess:  # run all cases in subprocess
        isolated_cases = test_suite_files

    with tempfile.TemporaryDirectory() as temp_result_dir:
        for env in set(test_suite_env_map.values()):
            parallel_run_case_in_env(env, run_config['envs'][env],
                                     test_suite_env_map, isolated_cases,
                                     temp_result_dir, args.parallel)

        result_dfs = []
        result_path = Path(temp_result_dir)
        for result in result_path.iterdir():
            if Path.is_file(result):
                df = pandas.read_pickle(result)
                result_dfs.append(df)

        result_pd = pandas.concat(
            result_dfs)  # merge result of every test suite.
        print_table_result(result_pd)
        print_abnormal_case_info(result_pd)
        statistics_test_result(result_pd)


def get_object_full_name(obj):
    klass = obj.__class__
    module = klass.__module__
    if module == 'builtins':
        return klass.__qualname__
    return module + '.' + klass.__qualname__


class TimeCostTextTestResult(TextTestResult):
    """Record the time each test case used."""

    def __init__(self, stream, descriptions, verbosity):
        self.successes = []
        return super(TimeCostTextTestResult,
                     self).__init__(stream, descriptions, verbosity)

    def startTest(self, test):
        test.start_time = datetime.datetime.now()
        test.test_full_name = get_object_full_name(
            test) + '.' + test._testMethodName
        self.stream.writeln('Test case: %s start at: %s' %
                            (test.test_full_name, test.start_time))
        return super(TimeCostTextTestResult, self).startTest(test)

    def stopTest(self, test):
        TextTestResult.stopTest(self, test)
        test.stop_time = datetime.datetime.now()
        test.time_cost = (test.stop_time - test.start_time).total_seconds()
        self.stream.writeln(
            'Test case: %s stop at: %s, cost time: %s(seconds)' %
            (test.test_full_name, test.stop_time, test.time_cost))
        if torch.cuda.is_available(
        ) and test.time_cost > 5.0:  # print nvidia-smi
            cmd = ['nvidia-smi']
            run_command_with_popen(cmd)
        super(TimeCostTextTestResult, self).stopTest(test)

    def addSuccess(self, test):
        self.successes.append(test)
        super(TextTestResult, self).addSuccess(test)


class TimeCostTextTestRunner(unittest.runner.TextTestRunner):
    resultclass = TimeCostTextTestResult

    def run(self, test):
        return super(TimeCostTextTestRunner, self).run(test)

    def _makeResult(self):
        result = super(TimeCostTextTestRunner, self)._makeResult()
        return result


def gather_test_cases(test_dir, pattern, list_tests):
    case_list = []
    for dirpath, dirnames, filenames in os.walk(test_dir):
        for file in filenames:
            if fnmatch(file, pattern):
                case_list.append(file)
    test_suite = unittest.TestSuite()
    for case in case_list:
        test_case = unittest.defaultTestLoader.discover(
            start_dir=test_dir, pattern=case)
        test_suite.addTest(test_case)
        if hasattr(test_case, '__iter__'):
            for subcase in test_case:
                if list_tests:
                    print(subcase)
        else:
            if list_tests:
                print(test_case)
    return test_suite


def print_abnormal_case_info(df):
    df = df.loc[(df['Result'] == 'Error') | (df['Result'] == 'Failures')]
    for _, row in df.iterrows():
        print('Case %s run result: %s, msg:\n%s' %
              (row['Name'], row['Result'], row['Info']))


def print_table_result(df):
    df = df.loc[df['Result'] != 'Skipped']
    df = df.drop('Info', axis=1)
    formatters = {
        'Name': '{{:<{}s}}'.format(df['Name'].str.len().max()).format,
        'Result': '{{:<{}s}}'.format(df['Result'].str.len().max()).format,
    }
    with pandas.option_context('display.max_rows', None, 'display.max_columns',
                               None, 'display.width', None):
        print(df.to_string(justify='left', formatters=formatters, index=False))


def main(args):
    runner = TimeCostTextTestRunner()
    if args.suites is not None and len(args.suites) > 0:
        logger.info('Running: %s' % ' '.join(args.suites))
        test_suite = gather_test_suites_in_files(args.test_dir, args.suites,
                                                 args.list_tests)
    else:
        test_suite = gather_test_cases(
            os.path.abspath(args.test_dir), args.pattern, args.list_tests)
    if not args.list_tests:
        result = runner.run(test_suite)
        logger.info('Running case completed, pid: %s, suites: %s' %
                    (os.getpid(), args.suites))
        result = collect_test_results(result)
        df = test_cases_result_to_df(result)
        if args.result_dir is not None:
            save_test_result(df, args)
        else:
            print_table_result(df)
            print_abnormal_case_info(df)
            statistics_test_result(df)


if __name__ == '__main__':
    parser = argparse.ArgumentParser('test runner')
    parser.add_argument(
        '--list_tests', action='store_true', help='list all tests')
    parser.add_argument(
        '--pattern', default='test_*.py', help='test file pattern')
    parser.add_argument(
        '--test_dir', default='tests', help='directory to be tested')
    parser.add_argument(
        '--level', default=0, type=int, help='2 -- all, 1 -- p1, 0 -- p0')
    parser.add_argument(
        '--disable_profile', action='store_true', help='disable profiling')
    parser.add_argument(
        '--run_config',
        default=None,
        help='specified case run config file (yaml file)')
    parser.add_argument(
        '--subprocess',
        action='store_true',
        help='run all test suites in subprocess')
    parser.add_argument(
        '--result_dir',
        default=None,
        help='Save result to directory, internal use only')
    parser.add_argument(
        '--parallel',
        default=1,
        type=int,
        help='Set case parallelism; defaults to a single process, '
        'set to the gpu number.')
    parser.add_argument(
        '--suites',
        nargs='*',
        help='Run specified test suites (test suite file list split by space)')
    args = parser.parse_args()
    set_test_level(args.level)
    os.environ['REGRESSION_BASELINE'] = '1'
    logger.info(f'TEST LEVEL: {test_level()}')
    if not args.disable_profile:
        from utils import profiler
        logger.info('enable profile ...')
        profiler.enable()
    if args.run_config is not None or args.subprocess:
        run_in_subprocess(args)
    else:
        main(args)
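The run-level verdict computed in `statistics_test_result` reduces to one rule: any `Error`, `Failures`, or `UnexpectedSuccesses` row fails the whole run, while `Skipped` and `ExpectedFailures` do not. A minimal stdlib-only sketch of that rule follows; the `summarize` helper is illustrative only and not part of run.py (the real function works on a pandas DataFrame and also exits with status 1 on failure):

```python
from collections import Counter

def summarize(results):
    """Mirror run.py's verdict rule on a plain list of per-case result strings.

    Error/Failures/UnexpectedSuccesses fail the run; Skipped and
    ExpectedFailures are tolerated.
    """
    counts = Counter(results)
    failed = (counts['Error'] + counts['Failures']
              + counts['UnexpectedSuccesses']) > 0
    verdict = 'FAILED' if failed else 'SUCCESS'
    return verdict, counts

# One Error is enough to fail the run, even with many successes.
print(summarize(['Success', 'Success', 'Skipped', 'Error'])[0])
# Expected failures alone do not fail the run.
print(summarize(['Success', 'ExpectedFailures', 'Skipped'])[0])
```

Note that `UnexpectedSuccesses` counts as a failure: a test marked `expectedFailure` that passes signals a stale annotation, which the runner treats as a regression of the test metadata itself.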