Contest: 2025 Digital China Innovation Contest, Digital Security Track, Spatiotemporal Data Security Challenge (also the preliminary round of the 5th Sanming "Hongminggu" Cup)

Time: 22 Mar 2025 10:00 CST - 22 Mar 2025 15:00 CST

Challenges reproduced after the contest are marked with 🔁

Misc

Anomalous Behavior Tracing

Challenge

Anomalous Behavior Tracing

Description:

A security analyst at a company is tracing attacks against the company's network assets. The analyst found that the attacker deleted the access logs for a certain period of time; however, the attacker had previously transferred the now-deleted log data, and that transfer was captured by a traffic-monitoring device. The analyst pre-filtered the traffic and extracted the relevant packets. It is known that the attacker initially probed at low intensity, and after noticing that security staff had not detected the activity, attacked continuously over multiple days. Help the company identify the attacker's IP. Flag format: flag{md5(IP)}

Attachment download (access code: GAME) · mirror download

Solution

The attachment is a capture file network_traffic.pcap; open it in Wireshark to take a first look

hmg2025-1

Every packet carries Base64 data like this in its Data field

hmg2025-2

Further analysis shows that it decodes to a piece of JSON, which itself contains another Base64 string, so decode that Base64 as well

hmg2025-3

This is presumably the access-log data mentioned in the challenge

So the plan is: extract the Data field from each packet, Base64-decode it to get JSON, extract the value of the msg key from the JSON and Base64-decode that too, and finally pull out the leading IP address from each access-log line for analysis
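Before processing the whole capture, the decoding chain can be sanity-checked on a synthetic payload. The log line below is made up for illustration; only the `{"msg": ..., "type": "Log-Data"}` JSON shape and the hex-of-Base64 encoding come from the capture:

```python
import base64
import json

# Hypothetical access-log line (not from the capture)
log_line = '10.0.0.1 - - [01/Jan/2025:00:00:00 +0000] "GET / HTTP/1.1" 200 1 "-" "ua"\n'

# Forward direction: how a packet's Data appears to be built
inner = base64.b64encode(log_line.encode()).decode()          # Base64 of the log line
packet_json = json.dumps({"msg": inner, "type": "Log-Data"})  # wrap in JSON
data_hex = base64.b64encode(packet_json.encode()).hex()       # Base64, then hex as seen in Wireshark

# Reverse direction: the extraction pipeline described above
ascii_b64 = bytes.fromhex(data_hex).decode("ascii")           # hex -> Base64 text
obj = json.loads(base64.b64decode(ascii_b64))                 # Base64 -> JSON
recovered = base64.b64decode(obj["msg"]).decode()             # msg -> log line
ip = recovered.split()[0]                                     # leading token is the IP
print(ip)
```

The round trip recovers the original log line, confirming the order of the decoding steps.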

```bash
tshark -r network_traffic.pcap -T fields -e _ws.col.Info > output.txt
```

First dump the Data of every packet into output.txt

```text
65794a74633263694f694a50564546315431526a6455317155586c4d616b6b3054464e4264456c4763336450517a6c4c57566330646b317155586c4f56473935545770764d5535366233704f6555467954555242643031474d47644a62454a51565446525a3077794d576868567a5232597a4a5761474e74546d394d626b4a7659304e4353565a47556c464d656b563154564e4a5a3031715158644a52456c36543052425a306c704d476c4a5130704f596a4e7763474a476547684d656c563154554e4262316b794f58526a523059775956644b6331705563326455566b354b556c4e424e5578715154644a526d5277596d3153646d517a545764556246466e546d6b3065453935516c566a62577872576c63314d4578365658564e553274705132633950534973496e5235634755694f694a4d623263745247463059534a39
```

output.txt now contains many lines of hex data like the above; first convert them to ASCII and save the result to output_ascii.txt

```python
input_file_path = "output.txt"
output_file_path = "output_ascii.txt"

try:
    with open(input_file_path, "r", encoding="utf-8") as input_file, \
         open(output_file_path, "w", encoding="utf-8") as output_file:
        for line in input_file:
            hex_data = line.strip()
            try:
                # Hex string -> bytes -> ASCII text (one Base64 line per packet)
                byte_data = bytes.fromhex(hex_data)
                ascii_data = byte_data.decode("ascii")
                output_file.write(ascii_data + "\n")
            except ValueError as ve:
                print(f"Failed to parse hex data: {hex_data}. Error: {ve}")
            except UnicodeDecodeError as ude:
                print(f"Failed to decode bytes as ASCII: {hex_data}. Error: {ude}")
except Exception as e:
    print(f"Error: {e}")
```

The resulting output_ascii.txt contains many lines of Base64 data like the following

```text
eyJtc2ciOiJPVEF1T1RjdU1qUXlMakk0TFNBdElGc3dPQzlLWVc0dk1qQXlOVG95TWpvMU56b3pOeUFyTURBd01GMGdJbEJQVTFRZ0wyMWhhVzR2YzJWaGNtTm9MbkJvY0NCSVZGUlFMekV1TVNJZ01qQXdJREl6T0RBZ0lpMGlJQ0pOYjNwcGJHeGhMelV1TUNBb1kyOXRjR0YwYVdKc1pUc2dUVk5KUlNBNUxqQTdJRmRwYm1SdmQzTWdUbFFnTmk0eE95QlVjbWxrWlc1MEx6VXVNU2tpQ2c9PSIsInR5cGUiOiJMb2ctRGF0YSJ9
```

Next, Base64-decode these to get the JSON data

```python
import base64

input_file_path = "output_ascii.txt"
output_file_path = "output_json.txt"

try:
    with open(input_file_path, "r", encoding="utf-8") as input_file, \
         open(output_file_path, "w", encoding="utf-8") as output_file:
        for line in input_file:
            base64_data = line.strip()
            try:
                # Base64 text -> JSON text
                decoded_bytes = base64.b64decode(base64_data)
                decoded_text = decoded_bytes.decode("utf-8")
                output_file.write(decoded_text + "\n")
            except base64.binascii.Error as b64_err:
                print(f"Failed to decode Base64 data: {base64_data}. Error: {b64_err}")
            except UnicodeDecodeError as ude:
                print(f"Failed to decode bytes as UTF-8: {base64_data}. Error: {ude}")
except Exception as e:
    print(f"Error: {e}")
```

A few errors showed up here: several records were corrupted, so I completed them by hand. The resulting output_json.txt contains many lines of JSON data like the following

```json
{"msg":"OTAuOTcuMjQyLjI4LSAtIFswOC9KYW4vMjAyNToyMjo1NzozNyArMDAwMF0gIlBPU1QgL21haW4vc2VhcmNoLnBocCBIVFRQLzEuMSIgMjAwIDIzODAgIi0iICJNb3ppbGxhLzUuMCAoY29tcGF0aWJsZTsgTVNJRSA5LjA7IFdpbmRvd3MgTlQgNi4xOyBUcmlkZW50LzUuMSkiCg==","type":"Log-Data"}
```
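As a quick sanity check, decoding the msg value of the sample record above does reproduce an Apache-style access-log entry, with the client IP as the leading token:

```python
import base64

# msg value copied from the sample JSON record above
msg = "OTAuOTcuMjQyLjI4LSAtIFswOC9KYW4vMjAyNToyMjo1NzozNyArMDAwMF0gIlBPU1QgL21haW4vc2VhcmNoLnBocCBIVFRQLzEuMSIgMjAwIDIzODAgIi0iICJNb3ppbGxhLzUuMCAoY29tcGF0aWJsZTsgTVNJRSA5LjA7IFdpbmRvd3MgTlQgNi4xOyBUcmlkZW50LzUuMSkiCg=="

log_line = base64.b64decode(msg).decode()
print(log_line)  # starts with the client IP, then timestamp, request, status, size, UA
```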

Then read out the value of msg from each record and save it to output_final.txt

```python
import base64
import json

input_file_path = "output_json.txt"
output_file_path = "output_final.txt"

try:
    with open(input_file_path, "r", encoding="utf-8") as input_file, \
         open(output_file_path, "w", encoding="utf-8") as output_file:
        for line in input_file:
            json_data = line.strip()
            if not json_data:
                continue
            try:
                data = json.loads(json_data)
                # Parsed successfully; extract the msg field
                if "msg" not in data:
                    print(f"Missing 'msg' field: {json_data}")
                    continue
                base64_msg = data["msg"]
            except json.JSONDecodeError:
                # Parsing failed: try to repair the record by
                # prepending the missing {"msg":" prefix
                try:
                    repaired_json_data = '{"msg":"' + json_data
                    data = json.loads(repaired_json_data)
                    base64_msg = data["msg"]
                except json.JSONDecodeError as jde:
                    print(f"Failed to parse or repair JSON data: {json_data}. Error: {jde}")
                    continue
            try:
                # Base64-decode the msg field
                decoded_bytes = base64.b64decode(base64_msg)
                # Decode the bytes as a string (assuming UTF-8)
                decoded_text = decoded_bytes.decode("utf-8")
                output_file.write(decoded_text + "\n")
            except base64.binascii.Error as b64_err:
                print(f"Failed to decode Base64 data: {base64_msg}. Error: {b64_err}")
            except UnicodeDecodeError as ude:
                print(f"Failed to decode bytes as UTF-8: {base64_msg}. Error: {ude}")
except Exception as e:
    print(f"Error: {e}")
```

This run also produced a few errors, but I didn't bother fixing them: the dataset is large enough that losing a handful of records won't affect the statistics

Finally, extract the IP addresses and sort them by occurrence count, highest first

```python
import re
from collections import Counter

input_file_path = "output_final.txt"
output_file_path = "sorted_ips_by_count.txt"

# Match IPv4 addresses
ip_pattern = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

ip_counter = Counter()

try:
    with open(input_file_path, "r", encoding="utf-8") as input_file:
        for line in input_file:
            log_line = line.strip()
            if not log_line:
                continue
            # Extract the first IP address on the log line
            match = ip_pattern.search(log_line)
            if match:
                ip_address = match.group(0)
                ip_counter[ip_address] += 1

    sorted_ips = sorted(ip_counter.items(), key=lambda x: x[1], reverse=True)

    with open(output_file_path, "w", encoding="utf-8") as output_file:
        for ip, count in sorted_ips:
            output_file.write(f"{ip} ({count} times)\n")
except Exception as e:
    print(f"Error: {e}")
```

This produces a sorted sorted_ips_by_count.txt; the top entries are

```text
35.127.46.111 (67 times)
199.250.217.171 (6 times)
181.73.51.10 (5 times)
159.242.64.164 (5 times)
196.133.63.166 (5 times)
```

35.127.46.111 appears far more often than any other address, which matches the challenge's description of a sustained multi-day attack, so the answer is clearly 35.127.46.111
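Per the flag format, the flag is the MD5 of the IP string, which can be computed with hashlib:

```python
import hashlib

ip = "35.127.46.111"
flag = "flag{" + hashlib.md5(ip.encode()).hexdigest() + "}"
print(flag)
```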

hmg2025-4

Flag

```text
flag{475ed6d7f74f586fb265f52eb42039b6}
```