Notes on some nasty pitfalls that keep Alibaba Canal from starting on Windows and Mac
1. Problem list
Problem 1: the window opened by double-clicking startup.bat closes immediately after it appears.
Problem 2: after startup, the log file reports the following error:
ERROR com.alibaba.otter.canal.deployer.CanalLauncher - ## Something goes wrong when starting up the canal Server:
java.lang.IllegalStateException: Extension instance(name: kafka, class: interface com.alibaba.otter.canal.connector.core.spi.CanalMQProducer) could not be instantiated: class could not be found
    at com.alibaba.otter.canal.connector.core.spi.ExtensionLoader.createExtension(ExtensionLoader.java:169) ~[connector.core-1.1.7-SNAPSHOT.jar:na]
    at com.alibaba.otter.canal.connector.core.spi.ExtensionLoader.getExtension(ExtensionLoader.java:121) ~[connector.core-1.1.7-SNAPSHOT.jar:na]
    at com.alibaba.otter.canal.deployer.CanalStarter.start(CanalStarter.java:69) ~[canal.deployer-1.1.7-SNAPSHOT.jar:na]
    at com.alibaba.otter.canal.deployer.CanalLauncher.main(CanalLauncher.java:124) ~[canal.deployer-1.1.7-SNAPSHOT.jar:na]
Problem 3: startup reports the following error:
2019-04-22 18:45:04.182 [destination = example , address = /127.0.0.1:3306 , EventParser] ERROR c.a.o.c.p.inbound.mysql.rds.RdsBinlogEventParserProxy - dump address /127.0.0.1:3306 has an error, retrying. caused by
java.io.IOException: Received error packet: errno = 1236, sqlstate = HY000 errmsg = Could not find first log file name in binary log index file
    at com.alibaba.otter.canal.parse.inbound.mysql.dbsync.DirectLogFetcher.fetch(DirectLogFetcher.java:102) ~[canal.parse-1.1.0.jar:na]
    at com.alibaba.otter.canal.parse.inbound.mysql.MysqlConnection.dump(MysqlConnection.java:213) ~[canal.parse-1.1.0.jar:na]
    at com.alibaba.otter.canal.parse.inbound.AbstractEventParser$3.run(AbstractEventParser.java:248) ~[canal.parse-1.1.0.jar:na]
    at java.lang.Thread.run(Thread.java:748) [na:1.8.0_202]
2. Solutions
Solution to problem 1: edit startup.bat and append pause as the last line. The console window then stays open instead of closing instantly, so you can read the actual error that caused the exit.
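A minimal sketch of the change, assuming a stock startup.bat (the existing java launch line is abbreviated here):

rem ... existing contents of startup.bat, ending with the java launch ...
java %JAVA_OPTS% ... com.alibaba.otter.canal.deployer.CanalLauncher
rem keep the console window open so the startup error stays readable
pause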
Solution to problem 2: the ExtensionLoader cannot find the kafka connector class on the classpath. Copy the jars from the plugin directory into lib and the extension can be instantiated.
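For example, run from the canal root directory (paths assume the default layout of the release package):

copy plugin\*.jar lib\     (Windows)
cp plugin/*.jar lib/       (Mac/Linux)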
Solution to problem 3: delete the meta.dat file that Canal generated under the conf folder, then restart. This file records the last consumed binlog position; if the server's binlog files have since been rotated away or replaced, the recorded position no longer exists, which is exactly what the errno 1236 "Could not find first log file name in binary log index file" message means.
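Assuming the default example instance, the file lives at conf\example\meta.dat:

del conf\example\meta.dat     (Windows)
rm conf/example/meta.dat      (Mac/Linux)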
3. References
Canal deployment and installation guide: https://blog.csdn.net/qq_40309183/article/details/124497076
Canal download page: https://github.com/alibaba/canal/releases
4. Configuration details
The conf\example\instance.properties file
Normally only the following parameters in this file need changing; everything else can keep its default:
# replace with your own MySQL address
canal.instance.master.address=127.0.0.1:3306
# replace with your own database name
canal.instance.defaultDatabaseName=test
#################################################
## mysql serverId , v1.0.26+ will autoGen
# each instance masquerades as a MySQL slave; the id must be unique among the slaves
# of the upstream MySQL instance, but the same instance within one Canal should keep
# the same slaveId
# canal.instance.mysql.slaveId=0

# enable gtid use true/false
canal.instance.gtidon=false

# position info
# address and port of the database to connect to
canal.instance.master.address=127.0.0.1:3306
# binlog file to start reading from
canal.instance.master.journal.name=
# offset within the starting binlog file
canal.instance.master.position=
# timestamp of the starting binlog
canal.instance.master.timestamp=
canal.instance.master.gtid=

# rds oss binlog
canal.instance.rds.accesskey=
canal.instance.rds.secretkey=
canal.instance.rds.instanceId=

# table meta tsdb info
# added in v1.0.25: whether to record time-series versions of table meta
canal.instance.tsdb.enable=true
# added in v1.0.25: storage for the table-meta time series; defaults to the instance directory
#canal.instance.tsdb.url=jdbc:mysql://127.0.0.1:3306/canal_tsdb
#canal.instance.tsdb.dbUsername=canal
#canal.instance.tsdb.dbPassword=canal

#canal.instance.standby.address =
#canal.instance.standby.journal.name =
#canal.instance.standby.position =
#canal.instance.standby.timestamp =
#canal.instance.standby.gtid=

# username/password
# database account
canal.instance.dbUsername=canal
# database password
canal.instance.dbPassword=canal
# charset used when decoding data from the database
canal.instance.connectionCharset=UTF-8
# default schema of the connection
canal.instance.defaultDatabaseName=test

# enable druid Decrypt database password
canal.instance.enableDruid=false
#canal.instance.pwdPublicKey=MFwwDQYJKoZIhvcNAQEBBQADSwAwSAJBALK4BUxdDltRRE5/zXpVEVPUgunvscYFtEip3pmLlhrWpacX7y7GCMo2/JM6LeHmiiNdH1FWgGCpUfircSwlWKUCAwEAAQ==

# table regex
# tables whose changes MySQL parsing should care about, as Perl regular expressions
canal.instance.filter.regex=.*\\..*
# table black regex
# tables matching this blacklist are neither parsed nor delivered
canal.instance.filter.black.regex=
#################################################
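The filter regex deserves a note: it is a comma-separated list of Perl regular expressions matched against schema.table. A few illustrative values (the table names here are hypothetical):

canal.instance.filter.regex=.*\\..*                 all tables in all schemas
canal.instance.filter.regex=test\\..*               all tables in the test schema
canal.instance.filter.regex=test.user,test.order    only test.user and test.order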
The conf\canal.properties file
Normally only the following parameter in this file needs changing:
# set to the middleware you use: tcp, kafka, rocketMQ, rabbitMQ or pulsarMQ
canal.serverMode = tcp
Then, in the middleware blocks at the bottom of the file, fill in the address for whichever one you use.
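For example, a minimal sketch for delivering to Kafka (the broker address is a hypothetical value; replace it with your own):

canal.serverMode = kafka
kafka.bootstrap.servers = 192.168.1.10:9092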
#################################################
#########       common argument      ############
#################################################
# unique id of each canal server instance
canal.id = 1
# local IP the canal server binds to; if left empty, a local IP is chosen automatically
canal.ip =
# port of the canal server's socket (TCP) service
canal.port = 11111
canal.metrics.pull.port = 11112
# connection string of the zookeeper cluster used by the canal server
canal.zkServers =

# flush data to zk
# how often canal flushes data to zookeeper, in milliseconds
canal.zookeeper.flush.period = 1000
canal.withoutNetty = false
# server mode: tcp, kafka, rocketMQ, rabbitMQ, pulsarMQ
canal.serverMode = tcp

# flush meta cursor/parse position to file
# directory where canal persists data to file
canal.file.data.dir = ${canal.conf.dir}
# how often canal flushes data to file, in milliseconds
canal.file.flush.period = 1000

## memory store RingBuffer size, should be Math.pow(2,n)
# number of records the in-memory store can buffer; must be a power of 2
canal.instance.memory.buffer.size = 16384
## memory store RingBuffer used memory unit size, default 1kb
# unit size of one record in memory, default 1KB; combined with buffer.size it
# determines the total memory used
canal.instance.memory.buffer.memunit = 1024
## memory store gets mode used MEMSIZE or ITEMSIZE
# caching mode of the in-memory store:
#   ITEMSIZE : limit by buffer.size, i.e. only the number of records
#   MEMSIZE  : limit by buffer.size * buffer.memunit, i.e. the total size of cached records
canal.instance.memory.batch.mode = MEMSIZE
canal.instance.memory.rawEntry = true

## detecting config
# whether to enable heartbeat checks
canal.instance.detecting.enable = false
#canal.instance.detecting.sql = insert into retl.xdual values(1,now()) on duplicate key update x=now()
# heartbeat check SQL
canal.instance.detecting.sql = select 1
# heartbeat check interval, in seconds
canal.instance.detecting.interval.time = 3
# number of retries after a failed heartbeat check
canal.instance.detecting.retry.threshold = 3
## NOTE: choose interval.time * retry.threshold with your DBA's typical database failover
## time in mind: too short and cluster roles will flap; too long and the liveness check
## loses its point, the cluster becomes less sensitive, and consumers are more likely
## to be disconnected.
# whether to fail over to another MySQL automatically after heartbeat checks fail;
# if true, once failures exceed the threshold canal connects to the MySQL standby
# to fetch binlog
canal.instance.detecting.heartbeatHaEnable = false

# support maximum transaction size, more than the size of the transaction will be cut into multiple transactions delivery
# maximum length of a transaction that is parsed as a whole; beyond this a transaction
# may be split into several submissions to the canal store, so full transactional
# visibility is not guaranteed
canal.instance.transaction.size = 1024

# mysql fallback connected to new master should fallback times
# when canal switches to a new MySQL server, how far back (in seconds) to search for the
# binlog position on it; master and standby may lag or have unsynchronized clocks, so
# rewinding a little guarantees no data is lost
canal.instance.fallbackIntervalInSeconds = 60

# network config
# socket option SO_RCVBUF
canal.instance.network.receiveBufferSize = 16384
# socket option SO_SNDBUF
canal.instance.network.sendBufferSize = 16384
# socket option SO_TIMEOUT
canal.instance.network.soTimeout = 30

# binlog filter config
canal.instance.filter.druid.ddl = true
# whether to ignore DCL query statements
canal.instance.filter.query.dcl = false
# whether to ignore DML query statements, e.g. insert/update/delete on a table
# (ROW mode in MySQL 5.6 can include statement-mode query records)
canal.instance.filter.query.dml = false
# whether to ignore DDL query statements, e.g. create table / alter table / drop table /
# rename table / create index / drop index (the supported DDL types are currently mainly
# table-level operations; create database / trigger / procedure are for now treated as DCL)
canal.instance.filter.query.ddl = false
canal.instance.filter.table.error = false
canal.instance.filter.rows = false
canal.instance.filter.transaction.entry = false

# binlog format/image check
canal.instance.binlog.format = ROW,STATEMENT,MIXED
canal.instance.binlog.image = FULL,MINIMAL,NOBLOB

# binlog ddl isolation
# whether DDL statements are delivered in isolation; when enabled, each batch returns at
# most one DDL entry, never mixed with DML entries (used by otter DDL synchronization)
canal.instance.get.ddl.isolation = false

# parallel parser config
canal.instance.parser.parallel = true
## concurrent thread number, default 60% available processors, suggest not to exceed Runtime.getRuntime().availableProcessors()
#canal.instance.parser.parallelThreadSize = 16
## disruptor ringbuffer size, must be power of 2
canal.instance.parser.parallelBufferSize = 256

# table meta tsdb info (time-series versioning of table meta)
canal.instance.tsdb.enable = true
canal.instance.tsdb.dir = ${canal.file.data.dir:../conf}/${canal.instance.destination:}
canal.instance.tsdb.url = jdbc:h2:${canal.instance.tsdb.dir}/h2;CACHE_SIZE=1000;MODE=MYSQL;
canal.instance.tsdb.dbUsername = canal
canal.instance.tsdb.dbPassword = canal
# dump snapshot interval, default 24 hour
canal.instance.tsdb.snapshot.interval = 24
# purge snapshot expire, default 360 hour (15 days)
canal.instance.tsdb.snapshot.expire = 360

# rds oss binlog account
canal.instance.rds.accesskey =
canal.instance.rds.secretkey =

#################################################
#########       destinations        ############
#################################################
canal.destinations = example
# conf root dir
canal.conf.dir = ../conf
# auto scan instance dir add/remove and start/stop instance
# when true, changes under canal.conf.dir are picked up automatically:
#   a. new instance directory: its config is loaded, and started when lazy is true
#   b. instance directory removed: the config is unloaded; a running instance is stopped
#   c. instance.properties changed: the config is reloaded; a running instance is restarted
canal.auto.scan = true
# auto-scan interval, in seconds
canal.auto.scan.interval = 5

canal.instance.tsdb.spring.xml = classpath:spring/tsdb/h2-tsdb.xml
#canal.instance.tsdb.spring.xml = classpath:spring/tsdb/mysql-tsdb.xml

# instance management mode; spring is required at production level
canal.instance.global.mode = spring
# global lazy mode
canal.instance.global.lazy = false
# connection info for the global manager mode
#canal.instance.global.manager.address = 127.0.0.1:1099
#canal.instance.global.spring.xml = classpath:spring/memory-instance.xml
# spring component file used in the global spring mode
canal.instance.global.spring.xml = classpath:spring/file-instance.xml
#canal.instance.global.spring.xml = classpath:spring/default-instance.xml

#################################################
#########       MQ Properties       ############
#################################################
# aliyun ak/sk, support rds/mq
canal.aliyun.accessKey =
canal.aliyun.secretKey =
canal.aliyun.uid =

canal.mq.flatMessage = true
canal.mq.canalBatchSize = 50
canal.mq.canalGetTimeout = 100
# Set this value to "cloud", if you want open message trace feature in aliyun.
canal.mq.accessChannel = local

canal.mq.database.hash = true
canal.mq.send.thread.size = 30
canal.mq.build.thread.size = 8

#################################################
#########           Kafka           ############
#################################################
kafka.bootstrap.servers = 127.0.0.1:9092
kafka.acks = 1
kafka.compression.type = none
kafka.batch.size = 16384
kafka.linger.ms = 1
kafka.max.request.size = 1048576
kafka.buffer.memory = 33554432
kafka.max.in.flight.requests.per.connection = 1
kafka.retries = 0

kafka.kerberos.enable = false
kafka.kerberos.krb5.file = ../conf/kerberos/krb5.conf
kafka.kerberos.jaas.file = ../conf/kerberos/jaas.conf

# sasl demo
# kafka.sasl.jaas.config = org.apache.kafka.common.security.scram.ScramLoginModule required \\n username=\"alice\" \\n password=\"alice-secret\";
# kafka.sasl.mechanism = SCRAM-SHA-512
# kafka.security.protocol = SASL_PLAINTEXT

#################################################
#########          RocketMQ         ############
#################################################
rocketmq.producer.group = test
rocketmq.enable.message.trace = false
rocketmq.customized.trace.topic =
rocketmq.namespace =
rocketmq.namesrv.addr = 127.0.0.1:9876
rocketmq.retry.times.when.send.failed = 0
rocketmq.vip.channel.enabled = false
rocketmq.tag =

#################################################
#########          RabbitMQ         ############
#################################################
rabbitmq.host =
rabbitmq.virtual.host =
rabbitmq.exchange =
rabbitmq.username =
rabbitmq.password =
rabbitmq.deliveryMode =

#################################################
#########           Pulsar          ############
#################################################
pulsarmq.serverUrl =
pulsarmq.roleToken =
pulsarmq.topicTenantPrefix =
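With canal.serverMode = tcp and the defaults above, a quick way to check that the server really works is a minimal Java client built on the official canal.client library. This is only a sketch: it assumes the com.alibaba.otter:canal.client dependency is on the classpath, the default example destination, and that canal.ip/canal.port were left unchanged.

import java.net.InetSocketAddress;

import com.alibaba.otter.canal.client.CanalConnector;
import com.alibaba.otter.canal.client.CanalConnectors;
import com.alibaba.otter.canal.protocol.Message;

public class CanalSmokeTest {
    public static void main(String[] args) {
        // connect to the canal server configured above (tcp mode, default port 11111)
        CanalConnector connector = CanalConnectors.newSingleConnector(
                new InetSocketAddress("127.0.0.1", 11111), "example", "", "");
        try {
            connector.connect();
            // same regex syntax as canal.instance.filter.regex: subscribe to everything
            connector.subscribe(".*\\..*");
            // pull up to 100 entries without auto-acknowledging them
            Message message = connector.getWithoutAck(100);
            long batchId = message.getId();
            System.out.println("batchId=" + batchId
                    + ", entries=" + message.getEntries().size());
            if (batchId != -1) {
                // acknowledge the batch so the consumption cursor moves forward
                connector.ack(batchId);
            }
        } finally {
            connector.disconnect();
        }
    }
}

If the server is healthy, the program prints a batch id (or -1 when there are no pending changes) instead of throwing a connection error.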
5. Database-side operations
Check whether binlog is enabled. If the Value column in the output is ON, binlog logging is on; if it is OFF, it is not:
SHOW VARIABLES LIKE 'log_bin';
Check the binlog position. In the output, the File column is the current binlog file name and the Position column is the current offset within it:
SHOW MASTER STATUS;
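If you want Canal to start from a fixed position (for example after wiping meta.dat, as in problem 3), these two columns map directly onto instance.properties; the file name and offset below are hypothetical example values:

canal.instance.master.journal.name=mysql-bin.000003
canal.instance.master.position=4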
Check the binlog format. There are three formats: row, statement and mixed; confirm that it is row:
SHOW VARIABLES LIKE '%binlog_format%';
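If binlog is off or the format is not row, enable it in the MySQL server config and restart. A typical my.cnf fragment (standard MySQL settings; the server-id value is arbitrary but must be set, and the file location varies by platform):

[mysqld]
log-bin=mysql-bin
binlog-format=ROW
server-id=1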
Note that querying binlog status and contents requires elevated privileges on the MySQL server (such as the SUPER or REPLICATION CLIENT privilege).
Creating the canal user:
-- create the canal user
CREATE USER canal IDENTIFIED BY 'canal';
-- grant the canal user the privileges it needs
GRANT SELECT, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'canal'@'%';
-- reload the privilege tables
FLUSH PRIVILEGES;
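To confirm the grants took effect:

SHOW GRANTS FOR 'canal'@'%';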
Source: https://blog.csdn.net/Wjhsmart/article/details/130933707