They talk about checking in generated files, but they also talk about using Bazel as the build system.
They’re holding it wrong.
Just define a BUILD target to generate the files. Don’t check them in. Any other target that depends on the generated files can depend on the target that generated them rather than depending on the files directly.
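A minimal sketch of that approach (target, tool, and file names here are hypothetical, not from the article):

```starlark
# BUILD — generate translation files at build time instead of checking them in
genrule(
    name = "gen_translations",
    srcs = ["messages.pot"],
    outs = [
        "translations/fr.json",
        "translations/de.json",
    ],
    # //tools:translate is a stand-in for whatever generator you actually use
    cmd = "$(location //tools:translate) $(SRCS) --out-dir $(RULEDIR)/translations",
    tools = ["//tools:translate"],
)

# Consumers depend on the generating target, not on the files on disk:
java_library(
    name = "app_lib",
    srcs = glob(["*.java"]),
    resources = [":gen_translations"],
)
```

Bazel then rebuilds the translations whenever their inputs change, and nothing downstream ever reads stale checked-in copies.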
My guess is that they haven’t fully embraced Bazel, so there must be parts of the CI/CD that are not defined as Bazel targets that also need these files…
That’s a naive take. These aren’t random autogenerated files. These are translation files. Even in the smoothest-running build systems and CI/CD pipelines, these can and often do go wrong, because there is still an important human factor in generating translations. A regression hitting localization data can make your whole system unusable for a whole portion of your userbase, and without those files under review you have no good way to detect, track, or even monitor it.
Checking these files into version control is the only reliable way to track changes in translation and accessibility data, and pinpoint regressions.
Source: I’ve worked for a company that had an internal translation service which by design required no human interaction and was only meant to be integrated as a post-build step, and that system failed often and catastrophically. The only surefire way of tracking the mess it made was to commit those files and track changes per commit as part of pull requests.
Google, the creator of Bazel, also checks in their generated translation files. They don’t generate them on the fly. They use a caching FUSE filesystem on top of Perforce to make it efficient. Teams that use Git within Google are encouraged to use many of the same tactics mentioned in this article.