【Flink】Solving JVM Metaspace overflow and MySQL binlog server-id conflicts
Problem 1: Seven tables live in the same MySQL instance. For incremental sync we read them with seven separate Flink jobs, which triggers a MySQL server-id conflict, like this:
Caused by: io.debezium.DebeziumException: A slave with the same server_uuid/server_id as this slave has connected to the master; the first event '' at 4, the last event read from '/home/mysql/log/mysql/mysql-bin.003630' at 62726118, the last byte read from '/home/mysql/log/mysql/mysql-bin.003630' at 62726118. Error code: 1236; SQLSTATE: HY000.
Analysis: multiple jobs read the same binlog, and their replication clients register with the same server-id, so MySQL rejects all but one.
There are write-ups online about setting the server-id explicitly. That does solve it, but the right variant depends on your deployment.
For example, we need one generic job (the same jar submitted 7 times, once per table, only changing the Flink program arguments);
the per-job approach does not fit that setup, because every submission is the same jar drawing on the same ids, and after reading for a while the error comes back (a sketch of that workaround follows, for setups where it does fit).
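For completeness, here is a hedged sketch of the per-job workaround. The parameter names (--server-id and friends) are illustrative, not from the original post; the point is that every submission must receive its own non-overlapping range, sized at least to the source parallelism.

```java
import com.ververica.cdc.connectors.mysql.source.MySqlSource;
import com.ververica.cdc.debezium.JsonDebeziumDeserializationSchema;
import org.apache.flink.api.java.utils.ParameterTool;

// Each of the 7 submissions passes a distinct, non-overlapping range,
// e.g. --server-id 5401-5404, then 5405-5408, and so on (illustrative values).
ParameterTool params = ParameterTool.fromArgs(args);

MySqlSource<String> source = MySqlSource.<String>builder()
        .hostname(params.get("host"))
        .port(3306)
        .databaseList(params.get("db"))
        .tableList(params.get("table"))
        .username(params.get("user"))
        .password(params.get("password"))
        .deserializer(new JsonDebeziumDeserializationSchema())
        // The range must contain at least `parallelism` ids, one per reader task.
        .serverId(params.getRequired("server-id"))
        .build();
```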
Solution: put all the tables that live in the same MySQL database into a single source.
Code: we sync from MySQL into StarRocks using the official SR sink function; if your sink is different, you can still use this source-side code as a reference.
The official docs recommend StarRocksSink.sink(options). Reading the source shows it is actually backed by the V2 sink function; stepping into it, StarRocks already does the per-record handling, so we only need to hand it the matching type.
So all we need is one operator transformation on the MySqlSource output. The key code:
```java
MySqlSourceBuilder<String> builder = new MySqlSourceBuilder<>();
MySqlSource<String> mySqlSource = builder
        .hostname(srcHost)
        .port(3306)
        .databaseList(srcDb)
        // format: db.table,db.table2,...
        .tableList(srcTable)
        .username(srcUsername)
        .password(srcPassword)
        .jdbcProperties(jdbcProperties)
        .debeziumProperties(properties)
        // The deserializer handles StarRocks insert/delete typing; see my article:
        // https://blog.csdn.net/JGMa_TiMo/article/details/128327546
        .deserializer(jsonStringDebeziumDeserializationSchema)
        .serverId("5400-6400")
        .build();

DataStreamSource<String> streamSource = env.fromSource(
        mySqlSource,
        WatermarkStrategy.forMonotonousTimestamps(),
        "[<< job: >>" + propKey + "]");
// .setParallelism(parallelism);

// TODO This is the key step: the deserializer may only return types Flink recognizes
// (usually String), so convert here into the StarRocksSinkRowDataWithMeta objects the SR sink can handle.
SingleOutputStreamOperator<StarRocksSinkRowDataWithMeta> streamOperator =
        streamSource.flatMap(new FlatMapFunction<String, StarRocksSinkRowDataWithMeta>() {
            @Override
            public void flatMap(String value, Collector<StarRocksSinkRowDataWithMeta> collector) throws Exception {
                HashMap hashMap = JsonUtils.parseObject(value, HashMap.class);
                StarRocksSinkRowDataWithMeta sinkRowDataWithMeta = new StarRocksSinkRowDataWithMeta();
                sinkRowDataWithMeta.addDataRow(value);
                assert hashMap != null;
                sinkRowDataWithMeta.setTable(hashMap.get("__table").toString());
                sinkRowDataWithMeta.setDatabase(sinkDb);
                collector.collect(sinkRowDataWithMeta);
            }
        }).name("Data Filtering");

StarRocksSinkOptions sinkOptions = StarRocksSinkOptions.builder()
        .withProperty("jdbc-url", "jdbc:mysql://" + sinkHost + ":9030?characterEncoding=utf-8&useSSL=false&connectionTimeZone=Asia/Shanghai")
        .withProperty("load-url", sinkHost + ":8030")
        .withProperty("database-name", sinkDb)
        .withProperty("username", sinkUsername)
        .withProperty("password", sinkPassword)
        // This setting is overridden per record when multiple source tables are routed
        .withProperty("table-name", "")
        .withProperty("sink.properties.format", "json")
        .withProperty("sink.properties.strip_outer_array", "true")
        .build();

// The SR sink source shows this is the function actually used; do not create it
// via SinkFunctionFactory — its generics will not accept this type.
streamOperator.addSink(new StarRocksDynamicSinkFunctionV2<>(sinkOptions))
        .name(">>>StarRocks " + propKey + " Sink<<<").uid(UUID.randomUUID().toString());
// streamOperator.addSink(StarRocksSink.sink(sinkOptions))
//         .name(">>>StarRocks " + propKey + " Sink<<<").uid(UUID.randomUUID().toString());

env.execute(propKey + "<< stream sync job >>" + srcHost + srcDb);
```
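The flatMap above expects the deserialized JSON to carry a `__table` key. The author's actual deserializer is in the linked article; what follows is only a minimal sketch of a DebeziumDeserializationSchema<String> that would satisfy that contract (the class body and the JsonUtils.toJsonString call are my assumptions, not the original code):

```java
import com.ververica.cdc.debezium.DebeziumDeserializationSchema;
import org.apache.flink.api.common.typeinfo.BasicTypeInfo;
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.util.Collector;
import org.apache.kafka.connect.data.Field;
import org.apache.kafka.connect.data.Struct;
import org.apache.kafka.connect.source.SourceRecord;

import java.util.HashMap;
import java.util.Map;

public class JsonStringDebeziumDeserializationSchema
        implements DebeziumDeserializationSchema<String> {

    @Override
    public void deserialize(SourceRecord record, Collector<String> out) {
        Struct value = (Struct) record.value();
        if (value == null) {
            return; // Debezium tombstone record, nothing to emit
        }
        // Debezium envelope: "after" holds the row for insert/update, "before" for delete.
        Struct row = value.getStruct("after") != null
                ? value.getStruct("after") : value.getStruct("before");
        Map<String, Object> json = new HashMap<>();
        if (row != null) {
            for (Field f : row.schema().fields()) {
                json.put(f.name(), row.get(f));
            }
        }
        // Inject the source table name; the flatMap above routes on this key.
        json.put("__table", value.getStruct("source").getString("table"));
        // JsonUtils.toJsonString is assumed here; any JSON library works.
        out.collect(JsonUtils.toJsonString(json));
    }

    @Override
    public TypeInformation<String> getProducedType() {
        return BasicTypeInfo.STRING_TYPE_INFO;
    }
}
```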
Interpretation: the idea is to read all seven tables of one database through a single Flink source. Routing each record to the right StarRocks table is already supported by the official sink code; we only add one Flink operator that converts records into the object the sink understands (in effect, one source feeding multiple logical sinks).
Problem 2: our Flink jobs are built with the DataStream API and submitted through the web UI; the TaskManager's JVM Metaspace keeps growing with each submission until the node dies.
I will not paste the stack trace; the symptom is the TaskManager killing itself, and its log shows an OOM: JVM Metaspace overflow.
Analysis: every web-UI submission makes Flink load the jar through dynamic classloading, and the classes of the previous jar's classloader are never released, so that Metaspace is never reclaimed by GC.
See the official Flink issues:
https://issues.apache.org/jira/browse/FLINK-11205
https://issues.apache.org/jira/browse/FLINK-16408
Fix: put the jar into the flink/lib directory. Flink then resolves those classes from the parent classloader first.
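Alongside moving the jar, the classloading order and the Metaspace ceiling can be tuned in flink-conf.yaml. A minimal sketch, assuming Flink 1.15 defaults; the values are examples, not from the original post:

```yaml
# flink-conf.yaml — illustrative values, tune to your cluster.
# With the job classes in flink/lib, resolve them from the parent classloader
# first so repeated web-UI submissions reuse the already-loaded classes.
classloader.resolve-order: parent-first
# Optional stopgap while migrating: raise the Metaspace ceiling so the
# TaskManager survives more submissions before the OOM (default is 256m).
taskmanager.memory.jvm-metaspace.size: 512m
```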
Note that this can cause version conflicts with the jars already in lib; work through those case by case from the error messages. Here is my environment for reference:
(Screenshot: jars already present in the environment's lib directory)
(Screenshot: the jars I added to lib)
Below are my jar's dependencies; build the package and drop it into lib:
```xml
<properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <maven.compiler.source>8</maven.compiler.source>
    <maven.compiler.target>8</maven.compiler.target>
    <flink.version>1.15.3</flink.version>
    <flink.connector.sr>1.2.5_flink-1.15</flink.connector.sr>
    <flink.connector.mysql>2.3.0</flink.connector.mysql>
    <scala.binary.version>2.12</scala.binary.version>
</properties>

<dependencies>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-runtime</artifactId>
        <version>${flink.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-table-api-java</artifactId>
        <version>${flink.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-connector-base</artifactId>
        <version>${flink.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-clients</artifactId>
        <version>${flink.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-streaming-java</artifactId>
        <version>${flink.version}</version>
    </dependency>
    <dependency>
        <groupId>com.starrocks</groupId>
        <artifactId>flink-connector-starrocks</artifactId>
        <version>${flink.connector.sr}</version>
    </dependency>
    <dependency>
        <groupId>com.ververica</groupId>
        <artifactId>flink-connector-mysql-cdc</artifactId>
        <version>${flink.connector.mysql}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-test-utils</artifactId>
        <version>${flink.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-table-test-utils</artifactId>
        <version>${flink.version}</version>
    </dependency>
    <dependency>
        <groupId>mysql</groupId>
        <artifactId>mysql-connector-java</artifactId>
        <version>8.0.29</version>
    </dependency>
    <dependency>
        <groupId>org.projectlombok</groupId>
        <artifactId>lombok</artifactId>
        <version>1.18.26</version>
    </dependency>
    <dependency>
        <groupId>org.slf4j</groupId>
        <artifactId>slf4j-api</artifactId>
        <version>2.0.4</version>
    </dependency>
    <dependency>
        <groupId>ch.qos.logback</groupId>
        <artifactId>logback-classic</artifactId>
        <version>1.3.4</version>
    </dependency>
    <dependency>
        <groupId>org.apache.kafka</groupId>
        <artifactId>connect-api</artifactId>
        <version>2.7.1</version>
    </dependency>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-shaded-hadoop-2</artifactId>
        <version>2.8.3-10.0</version>
    </dependency>
    <dependency>
        <groupId>commons-cli</groupId>
        <artifactId>commons-cli</artifactId>
        <version>1.5.0</version>
    </dependency>
    <dependency>
        <groupId>log4j</groupId>
        <artifactId>log4j</artifactId>
        <version>1.2.17</version>
    </dependency>
    <dependency>
        <groupId>cn.hutool</groupId>
        <artifactId>hutool-all</artifactId>
        <version>5.8.15</version>
    </dependency>
    <dependency>
        <groupId>org.springframework</groupId>
        <artifactId>spring-beans</artifactId>
        <version>5.3.26</version>
    </dependency>
    <dependency>
        <groupId>org.yaml</groupId>
        <artifactId>snakeyaml</artifactId>
        <version>2.0</version>
    </dependency>
</dependencies>
```
A final note on problem 2: I submit through the web UI (the command line works too). What I upload is just the thin source package, the code I actually wrote, only a few dozen KB.
Finally, if this solved your problem, please give it a like.
Source: https://blog.csdn.net/JGMa_TiMo/article/details/132695191