With the rapid growth in the proportion of elderly and disabled people worldwide, the demand for efficient healthcare solutions that support activities of daily living such as dressing is becoming increasingly urgent. Existing methods still struggle to handle highly deformable clothing and to adapt their interaction to users of different body shapes. This paper presents an innovative bimanual robot-assisted dressing system that employs MediaPipe to detect human keypoints and a multi-layer perceptron (MLP) to estimate user posture with low latency. Evaluation across 80 dressing trials demonstrated a 95% success rate under diverse user and clothing conditions, significantly surpassing the 61.25% achieved by a baseline relying solely on position control. The system improves both the comfort and the success rate of the dressing process and shows broad application prospects in real healthcare environments.
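As a rough illustration of the perception pipeline summarized above, the sketch below feeds MediaPipe Pose keypoints into a small MLP posture classifier. The layer sizes, the number of posture classes, and the names PostureMLP and estimate_posture are illustrative assumptions, not the paper's actual architecture.

```python
import mediapipe as mp
import numpy as np
import torch
import torch.nn as nn

NUM_KEYPOINTS = 33  # MediaPipe Pose returns 33 body landmarks per frame
NUM_POSTURES = 3    # hypothetical posture classes; the paper does not enumerate them


class PostureMLP(nn.Module):
    """Maps flattened (x, y, z) keypoint coordinates to posture-class logits."""

    def __init__(self) -> None:
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NUM_KEYPOINTS * 3, 128),
            nn.ReLU(),
            nn.Linear(128, 64),
            nn.ReLU(),
            nn.Linear(64, NUM_POSTURES),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


def estimate_posture(pose, model: PostureMLP, frame_rgb: np.ndarray) -> int:
    """Return a posture-class index for one RGB frame, or -1 if no person is found."""
    result = pose.process(frame_rgb)
    if result.pose_landmarks is None:
        return -1
    # Flatten the 33 landmarks into a single (1, 99) feature vector.
    feats = np.array(
        [(lm.x, lm.y, lm.z) for lm in result.pose_landmarks.landmark],
        dtype=np.float32,
    ).reshape(1, -1)
    with torch.no_grad():
        logits = model(torch.from_numpy(feats))
    return int(logits.argmax(dim=1).item())


if __name__ == "__main__":
    model = PostureMLP().eval()
    # Reuse one Pose instance across frames to keep per-frame latency low.
    with mp.solutions.pose.Pose(static_image_mode=False) as pose:
        dummy = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in camera frame
        print(estimate_posture(pose, model, dummy))      # -1: no person in frame
```

In a real dressing loop, the posture estimate would be produced per camera frame and passed to the bimanual controller; the dummy frame here simply demonstrates the interface.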