Abnormal JobManager Pod Exit

Last updated: 2023-11-07 16:34:40

    Overview

    The JobManager of a Flink job manages and schedules the whole job, so its failure can have severe consequences such as job crash and state loss. The system therefore continuously detects abnormal JobManager exits and pushes the corresponding events. In addition, to guarantee JobManager availability, HA is enabled for every job, so that when the existing JobManager exits unexpectedly, a new JobManager is automatically elected and the job is recovered. After an abnormal JobManager Pod exit, the job can generally recover automatically, but the completeness of the recovered job depends on whether checkpointing is enabled and on the implementation logic of the operators. We therefore recommend checking the job outputs (such as incorrect or duplicated data) after the job is recovered.
    Note
    The same Pod may be re-built several times by Kubernetes due to exceptions, so you may receive an identical event several times.
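    As noted above, whether the job recovers completely depends largely on checkpointing. As a sketch, periodic checkpointing can be enabled in flink-conf.yaml; the 60-second interval below is only an example value, not a recommendation from this document:

```yaml
# Enable periodic checkpointing so a newly elected JobManager can restore
# state after an abnormal exit. 60000 ms is an illustrative interval.
execution.checkpointing.interval: 60000
# EXACTLY_ONCE is Flink's default checkpointing mode; shown for clarity.
execution.checkpointing.mode: EXACTLY_ONCE
```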

    Trigger conditions

    The system monitors the exit of the JobManager Pod in real time, and determines whether an exit was caused by SIGTERM based on the exit code (the normal exit code is 143). An exit code other than 143 indicates that the exit was not a normal termination but was caused by a JobManager error, and this case is determined as an abnormal Pod exit event.
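    The detection rule above can be sketched as a small shell helper. The function name is hypothetical; 143 (128 plus SIGTERM's signal number 15) is the only code treated as a normal exit:

```shell
#!/bin/sh
# classify_exit: hypothetical helper mirroring the detection rule above.
# 143 = 128 + 15 (SIGTERM) is the normal termination code; any other
# code is treated as an abnormal JobManager Pod exit.
classify_exit() {
    if [ "$1" -eq 143 ]; then
        echo "normal"
    else
        echo "abnormal"
    fi
}

# In a real cluster, the last exit code of a JobManager Pod (name assumed)
# could be read with kubectl, e.g.:
#   kubectl get pod <jobmanager-pod> \
#     -o jsonpath='{.status.containerStatuses[0].lastState.terminated.exitCode}'
classify_exit 143    # prints "normal"
classify_exit 137    # prints "abnormal"
```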

    Alarm configuration

    You can configure an alarm policy for this event as instructed in [Configuring Event Alarms (Events)] to receive trigger and clearing notifications in real time.

    Suggestions

    Status Code: 137
    Possible Cause: The memory occupied by the job exceeded the memory quota of the Pod, and the Pod was killed due to OOM. This may be caused by an inappropriate implementation of the source connector that puts high memory pressure on the JobManager.
    Solution: If no cause can be identified, submit a ticket to contact the technicians for help.

    Status Code: -1
    Possible Cause: This is the fallback code of the basic policy, indicating that the Pod has exited but no exit code was returned due to system errors or other reasons.
    Solution: Submit a ticket to contact the technicians for help.

    Status Code: 0
    Possible Cause: During startup, the Pod could not be assigned an IP in the associated subnet (for example, because no IPs were available), resulting in startup failure and exit.
    Solution: Check whether there are sufficient available IPs in the VPC subnet associated with the cluster. If there are, submit a ticket to contact the technicians for help.

    Status Code: 1
    Possible Cause: An exception occurred during Flink initialization, resulting in startup failure. This is generally caused by package conflicts or the overwriting of critical configuration files.
    Solution: Search the logs for "Could not start cluster entrypoint" and view the relevant exceptions. If no cause can be identified, submit a ticket to contact the technicians for help.

    Status Code: 2
    Possible Cause: A fatal error occurred during the startup of the Flink JobManager.
    Solution: Search the logs for "Fatal error occurred in the cluster entrypoint" and view the relevant exceptions. If no cause can be identified, submit a ticket to contact the technicians for help.

    Status Code: 239
    Possible Cause: An uncaught fatal error occurred in a Flink execution thread.
    Solution: Search the logs for "produced an uncaught exception. Stopping the process" and other keywords and view the relevant exceptions. If no cause can be identified, submit a ticket to contact the technicians for help.
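    For exit codes 1, 2, and 239, the suggestions above rely on searching the JobManager logs for fixed keywords. A minimal sketch of such a search follows; the log file name and its contents are assumptions for illustration, and in practice you would point grep at the real JobManager log:

```shell
#!/bin/sh
# Create a tiny sample log so the example is self-contained; the file
# name jobmanager.log and its lines are hypothetical.
cat > jobmanager.log <<'EOF'
INFO  Starting cluster entrypoint
ERROR Could not start cluster entrypoint StandaloneSessionClusterEntrypoint
EOF

# Search for the keywords listed above (exit codes 1, 2, and 239).
grep -E 'Could not start cluster entrypoint|Fatal error occurred in the cluster entrypoint|produced an uncaught exception' jobmanager.log
```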
    