Automatic segmentation and classification of brain tumors are of great importance for clinical treatment, but both tasks are challenging because tumors vary widely in morphology and are often small. In this paper, we propose a multitask multiscale residual attention network (MMRAN) that segments and classifies brain tumors simultaneously and accurately. MMRAN is based on U-Net, with a parallel branch added at the end of the encoder to serve as the classification network. First, we propose a novel multiscale residual attention module (MRAM) that aggregates contextual features and better combines channel attention with spatial attention, and we add it to the shared parameter layers of MMRAN. Second, we propose a dynamic weight training method that improves model performance while minimizing the need for repeated experiments to determine the optimal weight for each task. Finally, prior knowledge of brain tumors is incorporated into the postprocessing of the segmented images to further improve segmentation accuracy. We evaluated MMRAN on a brain tumor data set containing meningioma, glioma, and pituitary tumors. In terms of segmentation performance, our method achieves Dice, Hausdorff distance (HD), mean intersection over union (MIoU), and mean pixel accuracy (MPA) values of 80.03%, 6.649 mm, 84.38%, and 89.41%, respectively. In terms of classification performance, our method achieves accuracy, recall, precision, and F1-score values of 89.87%, 90.44%, 88.56%, and 89.49%, respectively. Compared with other networks, MMRAN performs better on both segmentation and classification, which can significantly aid medical professionals in brain tumor management. The code and data set are available at https://github.com/linkenfaqiu/MMRAN.
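The abstract does not detail the internals of MRAM; as a rough illustration of what "combining channel attention and spatial attention" on a feature map can look like, here is a minimal CBAM-style sketch (an assumption for illustration, not the paper's exact module):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_spatial_attention(x):
    """Apply simple channel attention, then spatial attention, to a
    (C, H, W) feature map. CBAM-style sketch, not the actual MRAM."""
    # Channel attention: squeeze the spatial dimensions by global average
    # pooling, then gate each channel with a sigmoid weight in (0, 1).
    channel_weights = sigmoid(x.mean(axis=(1, 2)))            # shape (C,)
    x = x * channel_weights[:, None, None]
    # Spatial attention: squeeze the channel dimension by averaging,
    # then gate each spatial location.
    spatial_weights = sigmoid(x.mean(axis=0, keepdims=True))  # shape (1, H, W)
    return x * spatial_weights
```

In a real network the pooled descriptors would pass through small learned layers (e.g. a shared MLP or a convolution) before the sigmoid; the fixed gating above only shows the data flow.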
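The paper's exact dynamic weight training scheme is not specified in the abstract. One common way to set multitask loss weights dynamically (dynamic weight averaging, shown here purely as an illustrative stand-in) gives more weight to tasks whose loss has recently stalled:

```python
import math

def dynamic_task_weights(prev_losses, prev_prev_losses, temperature=2.0):
    """Illustrative DWA-style weighting: a task whose loss is falling
    slowly gets a larger weight. Not the paper's exact scheme."""
    # Rate of descent per task; a ratio near 1 means the loss has stalled.
    rates = [l1 / l2 for l1, l2 in zip(prev_losses, prev_prev_losses)]
    exp_rates = [math.exp(r / temperature) for r in rates]
    k = len(rates)  # weights are normalized to sum to the number of tasks
    total = sum(exp_rates)
    return [k * e / total for e in exp_rates]
```

The combined objective is then a weighted sum, e.g. `w_seg * seg_loss + w_cls * cls_loss`, with the weights recomputed each epoch instead of being tuned by hand over many runs.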
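The abstract also mentions postprocessing segmentations with prior knowledge of brain tumors. One plausible prior of this kind (an assumption here, since the paper's actual rules are not given in the abstract) is that each image contains a single tumor region, so spurious small predictions can be removed by keeping only the largest connected component:

```python
import numpy as np
from collections import deque

def keep_largest_component(mask):
    """Keep only the largest 4-connected foreground component of a binary
    mask. Encodes a single-tumor prior; the paper's actual postprocessing
    may differ."""
    h, w = mask.shape
    seen = np.zeros((h, w), dtype=bool)
    best = []
    for i in range(h):
        for j in range(w):
            if mask[i, j] and not seen[i, j]:
                # BFS flood fill to collect this component's pixels.
                queue, comp = deque([(i, j)]), []
                seen[i, j] = True
                while queue:
                    y, x = queue.popleft()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w \
                                and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            queue.append((ny, nx))
                if len(comp) > len(best):
                    best = comp
    out = np.zeros_like(mask)
    for y, x in best:
        out[y, x] = 1
    return out
```

In practice a library routine such as `scipy.ndimage.label` would replace the hand-rolled flood fill; the explicit BFS above just keeps the sketch dependency-free.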