Abstract: Multitask learning (MTL) aims to learn multiple tasks simultaneously while exploiting their mutual relationships. By using shared resources to simultaneously compute multiple outputs, this learning paradigm has the potential to achieve lower memory requirements and inference times than the traditional approach of using a separate method for each task. Previous work in MTL has mainly focused on fully supervised methods, as task relationships (TRs) can be leveraged not only to lower the data dependency of those methods but also to improve performance. However, MTL introduces a set of challenges due to its complex optimization scheme and higher labeling requirements. This article focuses on how MTL could be utilized under different partial supervision settings to address these challenges. First, it analyzes how MTL traditionally uses different parameter sharing techniques to transfer knowledge between tasks. Second, it presents the challenges arising from such a multiobjective optimization (MOO) scheme. Third, it introduces how task groupings (TGs) can be achieved by analyzing TRs. Fourth, it focuses on how partially supervised methods applied to MTL can tackle the aforementioned challenges. Lastly, this article presents the available datasets, tools, and benchmarking results of such methods. The reviewed articles, categorized following this work, are available at https://github.com/Klodivio355/MTL-CV-Review.