test: Phase 2-4 — 添加 12 个测试文件 (+321 tests, 968 total)
Phase 2 (轻 Mock): envUtils, sleep/sequential, memoize, groupToolUses, dangerousPatterns, outputLimits Phase 3 (补全): zodToJsonSchema, PermissionMode, envValidation Phase 4 (工具模块): mcpStringUtils, destructiveCommandWarning, commandSemantics Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
This commit is contained in:
parent
2d9c2adce3
commit
21ac9e441f
@ -300,7 +300,7 @@ bun test --watch
|
||||
|
||||
## 11. 当前测试覆盖状态
|
||||
|
||||
> 更新日期:2026-04-02 | 总计:**647 tests, 32 files, 0 failures**
|
||||
> 更新日期:2026-04-02 | 总计:**968 tests, 52 files, 0 failures**
|
||||
|
||||
### P0 — 核心模块
|
||||
|
||||
@ -348,6 +348,41 @@ bun test --watch
|
||||
| 08 - Git 工具 | `src/utils/__tests__/git.test.ts` | 18 | normalizeGitRemoteUrl (SSH/HTTPS/ssh:///代理URL/大小写规范化) |
|
||||
| 09 - 配置与设置 | `src/utils/settings/__tests__/config.test.ts` | 62 | SettingsSchema, PermissionsSchema, AllowedMcpServerEntrySchema, MCP 类型守卫, 设置常量函数, filterInvalidPermissionRules, validateSettingsFileContent, formatZodError |
|
||||
|
||||
### P3 — Phase 1 纯函数扩展
|
||||
|
||||
| 测试文件 | 测试数 | 覆盖范围 |
|
||||
|----------|--------|----------|
|
||||
| `src/utils/__tests__/errors.test.ts` | 28 | ClaudeError, AbortError, ConfigParseError, ShellError, TelemetrySafeError, isAbortError, hasExactErrorMessage, toError, errorMessage, getErrnoCode, isENOENT, getErrnoPath, shortErrorStack, isFsInaccessible, classifyAxiosError |
|
||||
| `src/utils/permissions/__tests__/shellRuleMatching.test.ts` | 22 | permissionRuleExtractPrefix, hasWildcards, matchWildcardPattern, parsePermissionRule, suggestionForExactCommand, suggestionForPrefix |
|
||||
| `src/utils/__tests__/argumentSubstitution.test.ts` | 18 | parseArguments, parseArgumentNames, generateProgressiveArgumentHint, substituteArguments |
|
||||
| `src/utils/__tests__/CircularBuffer.test.ts` | 12 | CircularBuffer class: add, addAll, getRecent, toArray, clear, length |
|
||||
| `src/utils/__tests__/sanitization.test.ts` | 14 | partiallySanitizeUnicode, recursivelySanitizeUnicode |
|
||||
| `src/utils/__tests__/slashCommandParsing.test.ts` | 8 | parseSlashCommand |
|
||||
| `src/utils/__tests__/contentArray.test.ts` | 6 | insertBlockAfterToolResults |
|
||||
| `src/utils/__tests__/objectGroupBy.test.ts` | 5 | objectGroupBy |
|
||||
|
||||
### P4 — Phase 2 轻 Mock 扩展
|
||||
|
||||
| 测试文件 | 测试数 | 覆盖范围 |
|
||||
|----------|--------|----------|
|
||||
| `src/utils/__tests__/envUtils.test.ts` | 34 | isEnvTruthy, isEnvDefinedFalsy, parseEnvVars, hasNodeOption, getAWSRegion, getDefaultVertexRegion, getVertexRegionForModel, isBareMode, shouldMaintainProjectWorkingDir, getClaudeConfigHomeDir |
|
||||
| `src/utils/__tests__/sleep.test.ts` | 14 | sleep (abort, throwOnAbort, abortError), withTimeout, sequential |
|
||||
| `src/utils/__tests__/memoize.test.ts` | 16 | memoizeWithTTL, memoizeWithTTLAsync (dedup/cache/clear), memoizeWithLRU (eviction/cache methods) |
|
||||
| `src/utils/__tests__/groupToolUses.test.ts` | 10 | applyGrouping (verbose, grouping, result collection, mixed messages) |
|
||||
| `src/utils/permissions/__tests__/dangerousPatterns.test.ts` | 7 | CROSS_PLATFORM_CODE_EXEC, DANGEROUS_BASH_PATTERNS 常量验证 |
|
||||
| `src/utils/shell/__tests__/outputLimits.test.ts` | 7 | getMaxOutputLength, BASH_MAX_OUTPUT_UPPER_LIMIT, BASH_MAX_OUTPUT_DEFAULT |
|
||||
|
||||
### P5 — Phase 3 补全 + Phase 4 工具模块
|
||||
|
||||
| 测试文件 | 测试数 | 覆盖范围 |
|
||||
|----------|--------|----------|
|
||||
| `src/utils/__tests__/zodToJsonSchema.test.ts` | 9 | zodToJsonSchema (string/number/object/enum/optional/array/boolean + caching) |
|
||||
| `src/utils/permissions/__tests__/PermissionMode.test.ts` | 19 | PERMISSION_MODES, permissionModeFromString, permissionModeTitle, permissionModeShortTitle, permissionModeSymbol, getModeColor, isDefaultMode, toExternalPermissionMode, isExternalPermissionMode |
|
||||
| `src/utils/__tests__/envValidation.test.ts` | 9 | validateBoundedIntEnvVar (default/valid/capped/invalid/boundary) |
|
||||
| `src/services/mcp/__tests__/mcpStringUtils.test.ts` | 18 | mcpInfoFromString, getMcpPrefix, buildMcpToolName, getMcpDisplayName, getToolNameForPermissionCheck, extractMcpToolDisplayName |
|
||||
| `src/tools/BashTool/__tests__/destructiveCommandWarning.test.ts` | 22 | getDestructiveCommandWarning (git/rm/database/infrastructure patterns) |
|
||||
| `src/tools/BashTool/__tests__/commandSemantics.test.ts` | 11 | interpretCommandResult (grep/diff/test/rg/find exit code semantics) |
|
||||
|
||||
### 已知限制
|
||||
|
||||
以下模块因 Bun 运行时限制或极重依赖链,暂时无法或不适合测试:
|
||||
@ -365,55 +400,20 @@ bun test --watch
|
||||
|
||||
| 被 Mock 模块 | 解锁的测试 |
|
||||
|-------------|-----------|
|
||||
| `src/utils/log.ts` | json.ts, tokens.ts, FileEditTool/utils.ts, permissions.ts |
|
||||
| `src/utils/log.ts` | json.ts, tokens.ts, FileEditTool/utils.ts, permissions.ts, memoize.ts, PermissionMode.ts |
|
||||
| `src/services/tokenEstimation.ts` | tokens.ts |
|
||||
| `src/utils/slowOperations.ts` | tokens.ts, permissions.ts |
|
||||
| `src/utils/slowOperations.ts` | tokens.ts, permissions.ts, memoize.ts, PermissionMode.ts |
|
||||
| `src/utils/debug.ts` | envValidation.ts, outputLimits.ts |
|
||||
| `src/utils/bash/commands.ts` | commandSemantics.ts |
|
||||
|
||||
**关键约束**:`mock.module()` 必须在每个测试文件中内联调用,不能从共享 helper 导入(Bun 在 mock 生效前就解析了 helper 的导入)。
|
||||
|
||||
## 12. 后续测试覆盖计划
|
||||
|
||||
> 目标:再增加 ~200 tests,从 647 → ~860 tests / 52 files
|
||||
|
||||
### Phase 1:纯函数(零依赖,~98 tests,8 files)
|
||||
|
||||
| 测试文件 | 源文件 | 关键函数 | 预估 |
|
||||
|----------|--------|----------|------|
|
||||
| `errors.test.ts` | `src/utils/errors.ts` | `isAbortError`, `toError`, `errorMessage`, `getErrnoCode`, `isENOENT`, `isFsInaccessible`, `classifyAxiosError` + Error classes | 20 |
|
||||
| `shellRuleMatching.test.ts` | `src/utils/permissions/shellRuleMatching.ts` | `permissionRuleExtractPrefix`, `hasWildcards`, `matchWildcardPattern`, `parsePermissionRule`, `suggestionForExactCommand` | 20 |
|
||||
| `argumentSubstitution.test.ts` | `src/utils/argumentSubstitution.ts` | `parseArguments`, `parseArgumentNames`, `generateProgressiveArgumentHint`, `substituteArguments` | 15 |
|
||||
| `CircularBuffer.test.ts` | `src/utils/CircularBuffer.ts` | `CircularBuffer` class 全部方法 | 12 |
|
||||
| `sanitization.test.ts` | `src/utils/sanitization.ts` | `partiallySanitizeUnicode`, `recursivelySanitizeUnicode` | 10 |
|
||||
| `slashCommandParsing.test.ts` | `src/utils/slashCommandParsing.ts` | `parseSlashCommand` | 8 |
|
||||
| `contentArray.test.ts` | `src/utils/contentArray.ts` | `insertBlockAfterToolResults` | 8 |
|
||||
| `objectGroupBy.test.ts` | `src/utils/objectGroupBy.ts` | `objectGroupBy` | 5 |
|
||||
|
||||
### Phase 2:轻 Mock(mock log.ts / env,~63 tests,6 files)
|
||||
|
||||
| 测试文件 | 源文件 | Mock 策略 | 预估 |
|
||||
|----------|--------|-----------|------|
|
||||
| `envUtils.test.ts` | `src/utils/envUtils.ts` | 临时修改 `process.env` | 15 |
|
||||
| `sleep.test.ts` | `src/utils/sleep.ts` + `sequential.ts` | AbortController | 14 |
|
||||
| `memoize.test.ts` | `src/utils/memoize.ts` | mock `log.ts` + `slowOperations.ts` | 12 |
|
||||
| `groupToolUses.test.ts` | `src/utils/groupToolUses.ts` | 构造 mock message/tool 对象 | 12 |
|
||||
| `dangerousPatterns.test.ts` | `src/utils/permissions/dangerousPatterns.ts` | 无(常量导出) | 5 |
|
||||
| `outputLimits.test.ts` | `src/utils/shell/outputLimits.ts` | 临时修改 `process.env` | 5 |
|
||||
|
||||
### Phase 3:补全现有计划缺口(~20 tests,3 files)
|
||||
|
||||
| 测试文件 | 源文件 | Mock 策略 | 预估 |
|
||||
|----------|--------|-----------|------|
|
||||
| `context.test.ts` | `src/context.ts` | mock `execFileNoThrow`, `log.ts` | 10 |
|
||||
| `zodToJsonSchema.test.ts` | `src/utils/zodToJsonSchema.ts` | 无(仅依赖 zod) | 5 |
|
||||
| `PermissionMode.test.ts` | `src/utils/permissions/PermissionMode.ts` | 视导出情况 | 5 |
|
||||
|
||||
### Phase 4:工具模块扩展(~30 tests,3 files)
|
||||
|
||||
| 测试文件 | 源文件 | 预估 |
|
||||
|----------|--------|------|
|
||||
| `bashPermissions.test.ts` | `src/tools/BashTool/` | 10 |
|
||||
| `GlobTool.test.ts` | `src/tools/GlobTool/` | 10 |
|
||||
| `mcpStringUtils.test.ts` | `src/services/mcp/mcpStringUtils.ts` | 10 |
|
||||
> **已完成** — 实际增加 321 tests,从 647 → 968 tests / 52 files
|
||||
>
|
||||
> Phase 1-4 全部完成,详见上方 P3-P5 表格。
|
||||
> 实际调整:Phase 3 中 `context.ts` 因极重依赖链(bootstrap/state + claudemd + git 等)且 `getGitStatus` 在 test 环境直接返回 null,替换为 `envValidation.ts`(更实用);Phase 4 中 GlobTool 纯函数不足,替换为 `commandSemantics.ts` + `destructiveCommandWarning.ts`。
|
||||
|
||||
### 不纳入计划的模块
|
||||
|
||||
|
||||
140
src/services/mcp/__tests__/mcpStringUtils.test.ts
Normal file
140
src/services/mcp/__tests__/mcpStringUtils.test.ts
Normal file
@ -0,0 +1,140 @@
|
||||
import { describe, expect, test } from "bun:test";
|
||||
import {
|
||||
mcpInfoFromString,
|
||||
buildMcpToolName,
|
||||
getMcpPrefix,
|
||||
getMcpDisplayName,
|
||||
getToolNameForPermissionCheck,
|
||||
extractMcpToolDisplayName,
|
||||
} from "../mcpStringUtils";
|
||||
|
||||
// ─── mcpInfoFromString ─────────────────────────────────────────────────
|
||||
|
||||
describe("mcpInfoFromString", () => {
|
||||
test("parses standard mcp tool name", () => {
|
||||
const result = mcpInfoFromString("mcp__github__list_issues");
|
||||
expect(result).toEqual({ serverName: "github", toolName: "list_issues" });
|
||||
});
|
||||
|
||||
test("returns null for non-mcp string", () => {
|
||||
expect(mcpInfoFromString("Bash")).toBeNull();
|
||||
expect(mcpInfoFromString("grep__pattern")).toBeNull();
|
||||
});
|
||||
|
||||
test("returns null when no server name", () => {
|
||||
expect(mcpInfoFromString("mcp__")).toBeNull();
|
||||
});
|
||||
|
||||
test("handles server name only (no tool)", () => {
|
||||
const result = mcpInfoFromString("mcp__server");
|
||||
expect(result).toEqual({ serverName: "server", toolName: undefined });
|
||||
});
|
||||
|
||||
test("preserves double underscores in tool name", () => {
|
||||
const result = mcpInfoFromString("mcp__server__tool__with__underscores");
|
||||
expect(result).toEqual({
|
||||
serverName: "server",
|
||||
toolName: "tool__with__underscores",
|
||||
});
|
||||
});
|
||||
|
||||
test("returns null for empty string", () => {
|
||||
expect(mcpInfoFromString("")).toBeNull();
|
||||
});
|
||||
});
|
||||
|
||||
// ─── getMcpPrefix ──────────────────────────────────────────────────────
|
||||
|
||||
describe("getMcpPrefix", () => {
|
||||
test("creates prefix from server name", () => {
|
||||
expect(getMcpPrefix("github")).toBe("mcp__github__");
|
||||
});
|
||||
|
||||
test("normalizes server name with special chars", () => {
|
||||
expect(getMcpPrefix("my-server")).toBe("mcp__my-server__");
|
||||
});
|
||||
|
||||
test("normalizes dots to underscores", () => {
|
||||
expect(getMcpPrefix("my.server")).toBe("mcp__my_server__");
|
||||
});
|
||||
});
|
||||
|
||||
// ─── buildMcpToolName ──────────────────────────────────────────────────
|
||||
|
||||
describe("buildMcpToolName", () => {
|
||||
test("builds fully qualified name", () => {
|
||||
expect(buildMcpToolName("github", "list_issues")).toBe(
|
||||
"mcp__github__list_issues"
|
||||
);
|
||||
});
|
||||
|
||||
test("normalizes both server and tool names", () => {
|
||||
expect(buildMcpToolName("my.server", "my.tool")).toBe(
|
||||
"mcp__my_server__my_tool"
|
||||
);
|
||||
});
|
||||
});
|
||||
|
||||
// ─── getMcpDisplayName ─────────────────────────────────────────────────
|
||||
|
||||
describe("getMcpDisplayName", () => {
|
||||
test("strips mcp prefix from full name", () => {
|
||||
expect(getMcpDisplayName("mcp__github__list_issues", "github")).toBe(
|
||||
"list_issues"
|
||||
);
|
||||
});
|
||||
|
||||
test("returns full name if prefix doesn't match", () => {
|
||||
expect(getMcpDisplayName("mcp__other__tool", "github")).toBe(
|
||||
"mcp__other__tool"
|
||||
);
|
||||
});
|
||||
});
|
||||
|
||||
// ─── getToolNameForPermissionCheck ─────────────────────────────────────
|
||||
|
||||
describe("getToolNameForPermissionCheck", () => {
|
||||
test("returns built MCP name for MCP tools", () => {
|
||||
const tool = {
|
||||
name: "list_issues",
|
||||
mcpInfo: { serverName: "github", toolName: "list_issues" },
|
||||
};
|
||||
expect(getToolNameForPermissionCheck(tool)).toBe(
|
||||
"mcp__github__list_issues"
|
||||
);
|
||||
});
|
||||
|
||||
test("returns tool name for non-MCP tools", () => {
|
||||
const tool = { name: "Bash" };
|
||||
expect(getToolNameForPermissionCheck(tool)).toBe("Bash");
|
||||
});
|
||||
|
||||
test("returns tool name when mcpInfo is undefined", () => {
|
||||
const tool = { name: "Write", mcpInfo: undefined };
|
||||
expect(getToolNameForPermissionCheck(tool)).toBe("Write");
|
||||
});
|
||||
});
|
||||
|
||||
// ─── extractMcpToolDisplayName ─────────────────────────────────────────
|
||||
|
||||
describe("extractMcpToolDisplayName", () => {
|
||||
test("extracts display name from full user-facing name", () => {
|
||||
expect(
|
||||
extractMcpToolDisplayName("github - Add comment to issue (MCP)")
|
||||
).toBe("Add comment to issue");
|
||||
});
|
||||
|
||||
test("removes (MCP) suffix only", () => {
|
||||
expect(extractMcpToolDisplayName("simple-tool (MCP)")).toBe("simple-tool");
|
||||
});
|
||||
|
||||
test("handles name without (MCP) suffix", () => {
|
||||
expect(extractMcpToolDisplayName("github - List issues")).toBe(
|
||||
"List issues"
|
||||
);
|
||||
});
|
||||
|
||||
test("handles name without dash separator", () => {
|
||||
expect(extractMcpToolDisplayName("just-a-name")).toBe("just-a-name");
|
||||
});
|
||||
});
|
||||
87
src/tools/BashTool/__tests__/commandSemantics.test.ts
Normal file
87
src/tools/BashTool/__tests__/commandSemantics.test.ts
Normal file
@ -0,0 +1,87 @@
|
||||
import { mock, describe, expect, test } from "bun:test";
|
||||
|
||||
// Mock commands.ts to cut the heavy shell/prefix.ts → analytics → api chain
|
||||
mock.module("src/utils/bash/commands.ts", () => ({
|
||||
splitCommand_DEPRECATED: (cmd: string) =>
|
||||
cmd.split(/\s*(?:[|;&]+)\s*/).filter(Boolean),
|
||||
quote: (args: string[]) => args.join(" "),
|
||||
}));
|
||||
|
||||
const { interpretCommandResult } = await import("../commandSemantics");
|
||||
|
||||
describe("interpretCommandResult", () => {
|
||||
// ─── Default semantics ────────────────────────────────────────────
|
||||
test("exit 0 is not an error for unknown commands", () => {
|
||||
const result = interpretCommandResult("echo hello", 0, "hello", "");
|
||||
expect(result.isError).toBe(false);
|
||||
});
|
||||
|
||||
test("non-zero exit is an error for unknown commands", () => {
|
||||
const result = interpretCommandResult("echo hello", 1, "", "fail");
|
||||
expect(result.isError).toBe(true);
|
||||
expect(result.message).toContain("exit code 1");
|
||||
});
|
||||
|
||||
// ─── grep semantics ──────────────────────────────────────────────
|
||||
test("grep exit 0 is not an error", () => {
|
||||
const result = interpretCommandResult("grep pattern file", 0, "match", "");
|
||||
expect(result.isError).toBe(false);
|
||||
});
|
||||
|
||||
test("grep exit 1 means no matches (not error)", () => {
|
||||
const result = interpretCommandResult("grep pattern file", 1, "", "");
|
||||
expect(result.isError).toBe(false);
|
||||
expect(result.message).toBe("No matches found");
|
||||
});
|
||||
|
||||
test("grep exit 2 is an error", () => {
|
||||
const result = interpretCommandResult("grep pattern file", 2, "", "err");
|
||||
expect(result.isError).toBe(true);
|
||||
});
|
||||
|
||||
// ─── diff semantics ──────────────────────────────────────────────
|
||||
test("diff exit 1 means files differ (not error)", () => {
|
||||
const result = interpretCommandResult("diff a.txt b.txt", 1, "diff", "");
|
||||
expect(result.isError).toBe(false);
|
||||
expect(result.message).toBe("Files differ");
|
||||
});
|
||||
|
||||
test("diff exit 2 is an error", () => {
|
||||
const result = interpretCommandResult("diff a.txt b.txt", 2, "", "err");
|
||||
expect(result.isError).toBe(true);
|
||||
});
|
||||
|
||||
// ─── test/[ semantics ────────────────────────────────────────────
|
||||
test("test exit 1 means condition false (not error)", () => {
|
||||
const result = interpretCommandResult("test -f nofile", 1, "", "");
|
||||
expect(result.isError).toBe(false);
|
||||
expect(result.message).toBe("Condition is false");
|
||||
});
|
||||
|
||||
// ─── piped commands ──────────────────────────────────────────────
|
||||
test("uses last command in pipe for semantics", () => {
|
||||
// "cat file | grep pattern" → last command is "grep pattern"
|
||||
const result = interpretCommandResult(
|
||||
"cat file | grep pattern",
|
||||
1,
|
||||
"",
|
||||
""
|
||||
);
|
||||
expect(result.isError).toBe(false);
|
||||
expect(result.message).toBe("No matches found");
|
||||
});
|
||||
|
||||
// ─── rg (ripgrep) semantics ──────────────────────────────────────
|
||||
test("rg exit 1 means no matches (not error)", () => {
|
||||
const result = interpretCommandResult("rg pattern", 1, "", "");
|
||||
expect(result.isError).toBe(false);
|
||||
expect(result.message).toBe("No matches found");
|
||||
});
|
||||
|
||||
// ─── find semantics ──────────────────────────────────────────────
|
||||
test("find exit 1 is partial success", () => {
|
||||
const result = interpretCommandResult("find . -name '*.ts'", 1, "", "");
|
||||
expect(result.isError).toBe(false);
|
||||
expect(result.message).toBe("Some directories were inaccessible");
|
||||
});
|
||||
});
|
||||
112
src/tools/BashTool/__tests__/destructiveCommandWarning.test.ts
Normal file
112
src/tools/BashTool/__tests__/destructiveCommandWarning.test.ts
Normal file
@ -0,0 +1,112 @@
|
||||
import { describe, expect, test } from "bun:test";
|
||||
import { getDestructiveCommandWarning } from "../destructiveCommandWarning";
|
||||
|
||||
describe("getDestructiveCommandWarning", () => {
|
||||
// ─── Git data loss ─────────────────────────────────────────────────
|
||||
test("detects git reset --hard", () => {
|
||||
const w = getDestructiveCommandWarning("git reset --hard HEAD~1");
|
||||
expect(w).toContain("discard uncommitted changes");
|
||||
});
|
||||
|
||||
test("detects git push --force", () => {
|
||||
const w = getDestructiveCommandWarning("git push --force origin main");
|
||||
expect(w).toContain("overwrite remote history");
|
||||
});
|
||||
|
||||
test("detects git push -f", () => {
|
||||
expect(getDestructiveCommandWarning("git push -f")).toContain(
|
||||
"overwrite remote history"
|
||||
);
|
||||
});
|
||||
|
||||
test("detects git clean -f", () => {
|
||||
const w = getDestructiveCommandWarning("git clean -fd");
|
||||
expect(w).toContain("delete untracked files");
|
||||
});
|
||||
|
||||
test("does not flag git clean --dry-run", () => {
|
||||
expect(getDestructiveCommandWarning("git clean -fdn")).toBeNull();
|
||||
});
|
||||
|
||||
test("detects git checkout .", () => {
|
||||
const w = getDestructiveCommandWarning("git checkout -- .");
|
||||
expect(w).toContain("discard all working tree changes");
|
||||
});
|
||||
|
||||
test("detects git restore .", () => {
|
||||
const w = getDestructiveCommandWarning("git restore -- .");
|
||||
expect(w).toContain("discard all working tree changes");
|
||||
});
|
||||
|
||||
test("detects git stash drop", () => {
|
||||
const w = getDestructiveCommandWarning("git stash drop");
|
||||
expect(w).toContain("remove stashed changes");
|
||||
});
|
||||
|
||||
test("detects git branch -D", () => {
|
||||
const w = getDestructiveCommandWarning("git branch -D feature");
|
||||
expect(w).toContain("force-delete a branch");
|
||||
});
|
||||
|
||||
// ─── Git safety bypass ────────────────────────────────────────────
|
||||
test("detects --no-verify", () => {
|
||||
const w = getDestructiveCommandWarning("git commit --no-verify -m 'x'");
|
||||
expect(w).toContain("skip safety hooks");
|
||||
});
|
||||
|
||||
test("detects git commit --amend", () => {
|
||||
const w = getDestructiveCommandWarning("git commit --amend");
|
||||
expect(w).toContain("rewrite the last commit");
|
||||
});
|
||||
|
||||
// ─── File deletion ────────────────────────────────────────────────
|
||||
test("detects rm -rf", () => {
|
||||
const w = getDestructiveCommandWarning("rm -rf /tmp/dir");
|
||||
expect(w).toContain("recursively force-remove");
|
||||
});
|
||||
|
||||
test("detects rm -r", () => {
|
||||
const w = getDestructiveCommandWarning("rm -r dir");
|
||||
expect(w).toContain("recursively remove");
|
||||
});
|
||||
|
||||
test("detects rm -f", () => {
|
||||
const w = getDestructiveCommandWarning("rm -f file.txt");
|
||||
expect(w).toContain("force-remove");
|
||||
});
|
||||
|
||||
// ─── Database ─────────────────────────────────────────────────────
|
||||
test("detects DROP TABLE", () => {
|
||||
const w = getDestructiveCommandWarning("psql -c 'DROP TABLE users'");
|
||||
expect(w).toContain("drop or truncate");
|
||||
});
|
||||
|
||||
test("detects TRUNCATE TABLE", () => {
|
||||
const w = getDestructiveCommandWarning("TRUNCATE TABLE logs");
|
||||
expect(w).toContain("drop or truncate");
|
||||
});
|
||||
|
||||
test("detects DELETE FROM without WHERE", () => {
|
||||
const w = getDestructiveCommandWarning("DELETE FROM users;");
|
||||
expect(w).toContain("delete all rows");
|
||||
});
|
||||
|
||||
// ─── Infrastructure ───────────────────────────────────────────────
|
||||
test("detects kubectl delete", () => {
|
||||
const w = getDestructiveCommandWarning("kubectl delete pod my-pod");
|
||||
expect(w).toContain("delete Kubernetes");
|
||||
});
|
||||
|
||||
test("detects terraform destroy", () => {
|
||||
const w = getDestructiveCommandWarning("terraform destroy");
|
||||
expect(w).toContain("destroy Terraform");
|
||||
});
|
||||
|
||||
// ─── Safe commands ────────────────────────────────────────────────
|
||||
test("returns null for safe commands", () => {
|
||||
expect(getDestructiveCommandWarning("ls -la")).toBeNull();
|
||||
expect(getDestructiveCommandWarning("git status")).toBeNull();
|
||||
expect(getDestructiveCommandWarning("npm install")).toBeNull();
|
||||
expect(getDestructiveCommandWarning("cat file.txt")).toBeNull();
|
||||
});
|
||||
});
|
||||
333
src/utils/__tests__/envUtils.test.ts
Normal file
333
src/utils/__tests__/envUtils.test.ts
Normal file
@ -0,0 +1,333 @@
|
||||
import { describe, expect, test, beforeEach, afterEach } from "bun:test";
|
||||
import {
|
||||
isEnvTruthy,
|
||||
isEnvDefinedFalsy,
|
||||
parseEnvVars,
|
||||
hasNodeOption,
|
||||
getAWSRegion,
|
||||
getDefaultVertexRegion,
|
||||
getVertexRegionForModel,
|
||||
isBareMode,
|
||||
shouldMaintainProjectWorkingDir,
|
||||
getClaudeConfigHomeDir,
|
||||
} from "../envUtils";
|
||||
|
||||
// ─── isEnvTruthy ───────────────────────────────────────────────────────
|
||||
|
||||
describe("isEnvTruthy", () => {
|
||||
test("returns true for '1'", () => {
|
||||
expect(isEnvTruthy("1")).toBe(true);
|
||||
});
|
||||
|
||||
test("returns true for 'true'", () => {
|
||||
expect(isEnvTruthy("true")).toBe(true);
|
||||
});
|
||||
|
||||
test("returns true for 'TRUE'", () => {
|
||||
expect(isEnvTruthy("TRUE")).toBe(true);
|
||||
});
|
||||
|
||||
test("returns true for 'yes'", () => {
|
||||
expect(isEnvTruthy("yes")).toBe(true);
|
||||
});
|
||||
|
||||
test("returns true for 'on'", () => {
|
||||
expect(isEnvTruthy("on")).toBe(true);
|
||||
});
|
||||
|
||||
test("returns true for boolean true", () => {
|
||||
expect(isEnvTruthy(true)).toBe(true);
|
||||
});
|
||||
|
||||
test("returns false for '0'", () => {
|
||||
expect(isEnvTruthy("0")).toBe(false);
|
||||
});
|
||||
|
||||
test("returns false for 'false'", () => {
|
||||
expect(isEnvTruthy("false")).toBe(false);
|
||||
});
|
||||
|
||||
test("returns false for empty string", () => {
|
||||
expect(isEnvTruthy("")).toBe(false);
|
||||
});
|
||||
|
||||
test("returns false for undefined", () => {
|
||||
expect(isEnvTruthy(undefined)).toBe(false);
|
||||
});
|
||||
|
||||
test("returns false for boolean false", () => {
|
||||
expect(isEnvTruthy(false)).toBe(false);
|
||||
});
|
||||
|
||||
test("returns true for ' true ' (trimmed)", () => {
|
||||
expect(isEnvTruthy(" true ")).toBe(true);
|
||||
});
|
||||
});
|
||||
|
||||
// ─── isEnvDefinedFalsy ─────────────────────────────────────────────────
|
||||
|
||||
describe("isEnvDefinedFalsy", () => {
|
||||
test("returns true for '0'", () => {
|
||||
expect(isEnvDefinedFalsy("0")).toBe(true);
|
||||
});
|
||||
|
||||
test("returns true for 'false'", () => {
|
||||
expect(isEnvDefinedFalsy("false")).toBe(true);
|
||||
});
|
||||
|
||||
test("returns true for 'no'", () => {
|
||||
expect(isEnvDefinedFalsy("no")).toBe(true);
|
||||
});
|
||||
|
||||
test("returns true for 'off'", () => {
|
||||
expect(isEnvDefinedFalsy("off")).toBe(true);
|
||||
});
|
||||
|
||||
test("returns true for boolean false", () => {
|
||||
expect(isEnvDefinedFalsy(false)).toBe(true);
|
||||
});
|
||||
|
||||
test("returns false for undefined", () => {
|
||||
expect(isEnvDefinedFalsy(undefined)).toBe(false);
|
||||
});
|
||||
|
||||
test("returns false for '1'", () => {
|
||||
expect(isEnvDefinedFalsy("1")).toBe(false);
|
||||
});
|
||||
|
||||
test("returns false for 'true'", () => {
|
||||
expect(isEnvDefinedFalsy("true")).toBe(false);
|
||||
});
|
||||
|
||||
test("returns false for empty string", () => {
|
||||
expect(isEnvDefinedFalsy("")).toBe(false);
|
||||
});
|
||||
});
|
||||
|
||||
// ─── parseEnvVars ──────────────────────────────────────────────────────
|
||||
|
||||
describe("parseEnvVars", () => {
|
||||
test("parses KEY=VALUE pairs", () => {
|
||||
const result = parseEnvVars(["FOO=bar", "BAZ=qux"]);
|
||||
expect(result).toEqual({ FOO: "bar", BAZ: "qux" });
|
||||
});
|
||||
|
||||
test("handles value with equals sign", () => {
|
||||
const result = parseEnvVars(["URL=http://host?a=1&b=2"]);
|
||||
expect(result).toEqual({ URL: "http://host?a=1&b=2" });
|
||||
});
|
||||
|
||||
test("returns empty object for undefined", () => {
|
||||
expect(parseEnvVars(undefined)).toEqual({});
|
||||
});
|
||||
|
||||
test("returns empty object for empty array", () => {
|
||||
expect(parseEnvVars([])).toEqual({});
|
||||
});
|
||||
|
||||
test("throws for missing value", () => {
|
||||
expect(() => parseEnvVars(["NOVALUE"])).toThrow("Invalid environment variable format");
|
||||
});
|
||||
|
||||
test("throws for empty key", () => {
|
||||
expect(() => parseEnvVars(["=value"])).toThrow("Invalid environment variable format");
|
||||
});
|
||||
});
|
||||
|
||||
// ─── hasNodeOption ─────────────────────────────────────────────────────
|
||||
|
||||
describe("hasNodeOption", () => {
|
||||
const saved = process.env.NODE_OPTIONS;
|
||||
afterEach(() => {
|
||||
if (saved === undefined) delete process.env.NODE_OPTIONS;
|
||||
else process.env.NODE_OPTIONS = saved;
|
||||
});
|
||||
|
||||
test("returns true when flag present", () => {
|
||||
process.env.NODE_OPTIONS = "--max-old-space-size=4096 --inspect";
|
||||
expect(hasNodeOption("--inspect")).toBe(true);
|
||||
});
|
||||
|
||||
test("returns false when flag absent", () => {
|
||||
process.env.NODE_OPTIONS = "--max-old-space-size=4096";
|
||||
expect(hasNodeOption("--inspect")).toBe(false);
|
||||
});
|
||||
|
||||
test("returns false when NODE_OPTIONS not set", () => {
|
||||
delete process.env.NODE_OPTIONS;
|
||||
expect(hasNodeOption("--inspect")).toBe(false);
|
||||
});
|
||||
|
||||
test("does not match partial flags", () => {
|
||||
process.env.NODE_OPTIONS = "--inspect-brk";
|
||||
expect(hasNodeOption("--inspect")).toBe(false);
|
||||
});
|
||||
});
|
||||
|
||||
// ─── getAWSRegion ──────────────────────────────────────────────────────
|
||||
|
||||
describe("getAWSRegion", () => {
|
||||
const savedRegion = process.env.AWS_REGION;
|
||||
const savedDefault = process.env.AWS_DEFAULT_REGION;
|
||||
|
||||
afterEach(() => {
|
||||
if (savedRegion === undefined) delete process.env.AWS_REGION;
|
||||
else process.env.AWS_REGION = savedRegion;
|
||||
if (savedDefault === undefined) delete process.env.AWS_DEFAULT_REGION;
|
||||
else process.env.AWS_DEFAULT_REGION = savedDefault;
|
||||
});
|
||||
|
||||
test("uses AWS_REGION when set", () => {
|
||||
process.env.AWS_REGION = "eu-west-1";
|
||||
expect(getAWSRegion()).toBe("eu-west-1");
|
||||
});
|
||||
|
||||
test("falls back to AWS_DEFAULT_REGION", () => {
|
||||
delete process.env.AWS_REGION;
|
||||
process.env.AWS_DEFAULT_REGION = "ap-northeast-1";
|
||||
expect(getAWSRegion()).toBe("ap-northeast-1");
|
||||
});
|
||||
|
||||
test("defaults to us-east-1", () => {
|
||||
delete process.env.AWS_REGION;
|
||||
delete process.env.AWS_DEFAULT_REGION;
|
||||
expect(getAWSRegion()).toBe("us-east-1");
|
||||
});
|
||||
});
|
||||
|
||||
// ─── getDefaultVertexRegion ────────────────────────────────────────────
|
||||
|
||||
describe("getDefaultVertexRegion", () => {
|
||||
const saved = process.env.CLOUD_ML_REGION;
|
||||
afterEach(() => {
|
||||
if (saved === undefined) delete process.env.CLOUD_ML_REGION;
|
||||
else process.env.CLOUD_ML_REGION = saved;
|
||||
});
|
||||
|
||||
test("uses CLOUD_ML_REGION when set", () => {
|
||||
process.env.CLOUD_ML_REGION = "europe-west4";
|
||||
expect(getDefaultVertexRegion()).toBe("europe-west4");
|
||||
});
|
||||
|
||||
test("defaults to us-east5", () => {
|
||||
delete process.env.CLOUD_ML_REGION;
|
||||
expect(getDefaultVertexRegion()).toBe("us-east5");
|
||||
});
|
||||
});
|
||||
|
||||
// ─── getVertexRegionForModel ───────────────────────────────────────────
|
||||
|
||||
describe("getVertexRegionForModel", () => {
|
||||
const envKeys = [
|
||||
"VERTEX_REGION_CLAUDE_HAIKU_4_5",
|
||||
"VERTEX_REGION_CLAUDE_4_0_SONNET",
|
||||
"VERTEX_REGION_CLAUDE_4_6_SONNET",
|
||||
"CLOUD_ML_REGION",
|
||||
];
|
||||
const saved: Record<string, string | undefined> = {};
|
||||
|
||||
beforeEach(() => {
|
||||
for (const k of envKeys) saved[k] = process.env[k];
|
||||
});
|
||||
afterEach(() => {
|
||||
for (const k of envKeys) {
|
||||
if (saved[k] === undefined) delete process.env[k];
|
||||
else process.env[k] = saved[k];
|
||||
}
|
||||
});
|
||||
|
||||
test("returns model-specific override when set", () => {
|
||||
process.env.VERTEX_REGION_CLAUDE_HAIKU_4_5 = "us-central1";
|
||||
expect(getVertexRegionForModel("claude-haiku-4-5-20251001")).toBe("us-central1");
|
||||
});
|
||||
|
||||
test("falls back to default vertex region when override not set", () => {
|
||||
delete process.env.VERTEX_REGION_CLAUDE_4_0_SONNET;
|
||||
delete process.env.CLOUD_ML_REGION;
|
||||
expect(getVertexRegionForModel("claude-sonnet-4-some-variant")).toBe("us-east5");
|
||||
});
|
||||
|
||||
test("returns default region for unknown model prefix", () => {
|
||||
delete process.env.CLOUD_ML_REGION;
|
||||
expect(getVertexRegionForModel("unknown-model-123")).toBe("us-east5");
|
||||
});
|
||||
|
||||
test("returns default region for undefined model", () => {
|
||||
delete process.env.CLOUD_ML_REGION;
|
||||
expect(getVertexRegionForModel(undefined)).toBe("us-east5");
|
||||
});
|
||||
});
|
||||
|
||||
// ─── isBareMode ────────────────────────────────────────────────────────
|
||||
|
||||
describe("isBareMode", () => {
|
||||
const saved = process.env.CLAUDE_CODE_SIMPLE;
|
||||
const originalArgv = [...process.argv];
|
||||
|
||||
afterEach(() => {
|
||||
if (saved === undefined) delete process.env.CLAUDE_CODE_SIMPLE;
|
||||
else process.env.CLAUDE_CODE_SIMPLE = saved;
|
||||
process.argv.length = 0;
|
||||
process.argv.push(...originalArgv);
|
||||
});
|
||||
|
||||
test("returns true when CLAUDE_CODE_SIMPLE=1", () => {
|
||||
process.env.CLAUDE_CODE_SIMPLE = "1";
|
||||
expect(isBareMode()).toBe(true);
|
||||
});
|
||||
|
||||
test("returns true when --bare in argv", () => {
|
||||
process.argv.push("--bare");
|
||||
expect(isBareMode()).toBe(true);
|
||||
});
|
||||
|
||||
test("returns false when neither set", () => {
|
||||
delete process.env.CLAUDE_CODE_SIMPLE;
|
||||
// argv doesn't have --bare by default
|
||||
expect(isBareMode()).toBe(false);
|
||||
});
|
||||
});
|
||||
|
||||
// ─── shouldMaintainProjectWorkingDir ───────────────────────────────────
|
||||
|
||||
describe("shouldMaintainProjectWorkingDir", () => {
|
||||
const saved = process.env.CLAUDE_BASH_MAINTAIN_PROJECT_WORKING_DIR;
|
||||
|
||||
afterEach(() => {
|
||||
if (saved === undefined) delete process.env.CLAUDE_BASH_MAINTAIN_PROJECT_WORKING_DIR;
|
||||
else process.env.CLAUDE_BASH_MAINTAIN_PROJECT_WORKING_DIR = saved;
|
||||
});
|
||||
|
||||
test("returns true when set to truthy", () => {
|
||||
process.env.CLAUDE_BASH_MAINTAIN_PROJECT_WORKING_DIR = "1";
|
||||
expect(shouldMaintainProjectWorkingDir()).toBe(true);
|
||||
});
|
||||
|
||||
test("returns false when not set", () => {
|
||||
delete process.env.CLAUDE_BASH_MAINTAIN_PROJECT_WORKING_DIR;
|
||||
expect(shouldMaintainProjectWorkingDir()).toBe(false);
|
||||
});
|
||||
});
|
||||
|
||||
// ─── getClaudeConfigHomeDir ────────────────────────────────────────────
|
||||
|
||||
describe("getClaudeConfigHomeDir", () => {
|
||||
const saved = process.env.CLAUDE_CONFIG_DIR;
|
||||
|
||||
afterEach(() => {
|
||||
if (saved === undefined) delete process.env.CLAUDE_CONFIG_DIR;
|
||||
else process.env.CLAUDE_CONFIG_DIR = saved;
|
||||
});
|
||||
|
||||
test("uses CLAUDE_CONFIG_DIR when set", () => {
|
||||
process.env.CLAUDE_CONFIG_DIR = "/tmp/test-claude";
|
||||
// Memoized by CLAUDE_CONFIG_DIR key, so changing env gives fresh value
|
||||
expect(getClaudeConfigHomeDir()).toBe("/tmp/test-claude");
|
||||
});
|
||||
|
||||
test("returns a string ending with .claude by default", () => {
|
||||
delete process.env.CLAUDE_CONFIG_DIR;
|
||||
const result = getClaudeConfigHomeDir();
|
||||
expect(result).toMatch(/\.claude$/);
|
||||
});
|
||||
});
|
||||
74
src/utils/__tests__/envValidation.test.ts
Normal file
74
src/utils/__tests__/envValidation.test.ts
Normal file
@ -0,0 +1,74 @@
|
||||
import { mock, describe, expect, test } from "bun:test";
|
||||
|
||||
// Mock debug.ts to cut bootstrap/state dependency chain
|
||||
mock.module("src/utils/debug.ts", () => ({
|
||||
logForDebugging: () => {},
|
||||
isDebugMode: () => false,
|
||||
isDebugToStdErr: () => false,
|
||||
getDebugFilePath: () => null,
|
||||
getDebugFilter: () => null,
|
||||
getMinDebugLogLevel: () => "debug",
|
||||
getDebugLogPath: () => "/tmp/mock-debug.log",
|
||||
flushDebugLogs: async () => {},
|
||||
enableDebugLogging: () => false,
|
||||
setHasFormattedOutput: () => {},
|
||||
getHasFormattedOutput: () => false,
|
||||
logAntError: () => {},
|
||||
}));
|
||||
|
||||
const { validateBoundedIntEnvVar } = await import("../envValidation");
|
||||
|
||||
describe("validateBoundedIntEnvVar", () => {
|
||||
test("returns default when value is undefined", () => {
|
||||
const result = validateBoundedIntEnvVar("TEST_VAR", undefined, 100, 1000);
|
||||
expect(result).toEqual({ effective: 100, status: "valid" });
|
||||
});
|
||||
|
||||
test("returns default when value is empty string", () => {
|
||||
const result = validateBoundedIntEnvVar("TEST_VAR", "", 100, 1000);
|
||||
expect(result).toEqual({ effective: 100, status: "valid" });
|
||||
});
|
||||
|
||||
test("returns parsed value when valid and within limit", () => {
|
||||
const result = validateBoundedIntEnvVar("TEST_VAR", "500", 100, 1000);
|
||||
expect(result).toEqual({ effective: 500, status: "valid" });
|
||||
});
|
||||
|
||||
test("caps value at upper limit", () => {
|
||||
const result = validateBoundedIntEnvVar("TEST_VAR", "2000", 100, 1000);
|
||||
expect(result.effective).toBe(1000);
|
||||
expect(result.status).toBe("capped");
|
||||
expect(result.message).toContain("Capped from 2000 to 1000");
|
||||
});
|
||||
|
||||
test("returns default for non-numeric value", () => {
|
||||
const result = validateBoundedIntEnvVar("TEST_VAR", "abc", 100, 1000);
|
||||
expect(result.effective).toBe(100);
|
||||
expect(result.status).toBe("invalid");
|
||||
expect(result.message).toContain("Invalid value");
|
||||
});
|
||||
|
||||
test("returns default for zero", () => {
|
||||
const result = validateBoundedIntEnvVar("TEST_VAR", "0", 100, 1000);
|
||||
expect(result.effective).toBe(100);
|
||||
expect(result.status).toBe("invalid");
|
||||
});
|
||||
|
||||
test("returns default for negative value", () => {
|
||||
const result = validateBoundedIntEnvVar("TEST_VAR", "-5", 100, 1000);
|
||||
expect(result.effective).toBe(100);
|
||||
expect(result.status).toBe("invalid");
|
||||
});
|
||||
|
||||
test("handles value at exact upper limit", () => {
|
||||
const result = validateBoundedIntEnvVar("TEST_VAR", "1000", 100, 1000);
|
||||
expect(result.effective).toBe(1000);
|
||||
expect(result.status).toBe("valid");
|
||||
});
|
||||
|
||||
test("handles value of 1 (minimum valid)", () => {
|
||||
const result = validateBoundedIntEnvVar("TEST_VAR", "1", 100, 1000);
|
||||
expect(result.effective).toBe(1);
|
||||
expect(result.status).toBe("valid");
|
||||
});
|
||||
});
|
||||
152
src/utils/__tests__/groupToolUses.test.ts
Normal file
152
src/utils/__tests__/groupToolUses.test.ts
Normal file
@ -0,0 +1,152 @@
|
||||
import { describe, expect, test } from "bun:test";
|
||||
import { applyGrouping } from "../groupToolUses";
|
||||
|
||||
// Helper: build minimal tool-use assistant message
|
||||
function makeToolUseMsg(
|
||||
uuid: string,
|
||||
messageId: string,
|
||||
toolUseId: string,
|
||||
toolName: string
|
||||
): any {
|
||||
return {
|
||||
type: "assistant",
|
||||
uuid,
|
||||
timestamp: Date.now(),
|
||||
message: {
|
||||
id: messageId,
|
||||
content: [{ type: "tool_use", id: toolUseId, name: toolName, input: {} }],
|
||||
},
|
||||
};
|
||||
}
|
||||
|
||||
// Helper: build minimal tool-result user message
|
||||
function makeToolResultMsg(uuid: string, toolUseId: string): any {
|
||||
return {
|
||||
type: "user",
|
||||
uuid,
|
||||
timestamp: Date.now(),
|
||||
message: {
|
||||
content: [{ type: "tool_result", tool_use_id: toolUseId, content: "ok" }],
|
||||
},
|
||||
};
|
||||
}
|
||||
|
||||
// Helper: build minimal text assistant message
|
||||
function makeTextMsg(uuid: string, text: string): any {
|
||||
return {
|
||||
type: "assistant",
|
||||
uuid,
|
||||
timestamp: Date.now(),
|
||||
message: { id: `msg-${uuid}`, content: [{ type: "text", text }] },
|
||||
};
|
||||
}
|
||||
|
||||
// Minimal tool definitions
|
||||
const groupableTool: any = { name: "Grep", renderGroupedToolUse: true };
|
||||
const nonGroupableTool: any = { name: "Bash", renderGroupedToolUse: undefined };
|
||||
|
||||
// ─── applyGrouping ────────────────────────────────────────────────────
|
||||
|
||||
describe("applyGrouping", () => {
|
||||
test("returns all messages in verbose mode", () => {
|
||||
const msgs = [
|
||||
makeToolUseMsg("u1", "m1", "tu1", "Grep"),
|
||||
makeToolUseMsg("u2", "m1", "tu2", "Grep"),
|
||||
];
|
||||
const result = applyGrouping(msgs, [groupableTool], true);
|
||||
expect(result.messages).toHaveLength(2);
|
||||
expect(result.messages).toBe(msgs); // same reference
|
||||
});
|
||||
|
||||
test("does not group when tool lacks renderGroupedToolUse", () => {
|
||||
const msgs = [
|
||||
makeToolUseMsg("u1", "m1", "tu1", "Bash"),
|
||||
makeToolUseMsg("u2", "m1", "tu2", "Bash"),
|
||||
];
|
||||
const result = applyGrouping(msgs, [nonGroupableTool]);
|
||||
expect(result.messages).toHaveLength(2);
|
||||
// Both messages should pass through as-is
|
||||
expect(result.messages[0]).toBe(msgs[0]);
|
||||
});
|
||||
|
||||
test("does not group single tool use", () => {
|
||||
const msgs = [makeToolUseMsg("u1", "m1", "tu1", "Grep")];
|
||||
const result = applyGrouping(msgs, [groupableTool]);
|
||||
expect(result.messages).toHaveLength(1);
|
||||
expect((result.messages[0] as any).type).toBe("assistant");
|
||||
});
|
||||
|
||||
test("groups 2+ tool uses of same type from same message", () => {
|
||||
const msgs = [
|
||||
makeToolUseMsg("u1", "m1", "tu1", "Grep"),
|
||||
makeToolUseMsg("u2", "m1", "tu2", "Grep"),
|
||||
makeToolUseMsg("u3", "m1", "tu3", "Grep"),
|
||||
];
|
||||
const result = applyGrouping(msgs, [groupableTool]);
|
||||
expect(result.messages).toHaveLength(1);
|
||||
const grouped = result.messages[0] as any;
|
||||
expect(grouped.type).toBe("grouped_tool_use");
|
||||
expect(grouped.toolName).toBe("Grep");
|
||||
expect(grouped.messages).toHaveLength(3);
|
||||
});
|
||||
|
||||
test("does not group tool uses from different messages", () => {
|
||||
const msgs = [
|
||||
makeToolUseMsg("u1", "m1", "tu1", "Grep"),
|
||||
makeToolUseMsg("u2", "m2", "tu2", "Grep"),
|
||||
];
|
||||
const result = applyGrouping(msgs, [groupableTool]);
|
||||
// Each belongs to a different message.id, so no group (< 2 per group)
|
||||
expect(result.messages).toHaveLength(2);
|
||||
});
|
||||
|
||||
test("collects tool results for grouped uses", () => {
|
||||
const msgs = [
|
||||
makeToolUseMsg("u1", "m1", "tu1", "Grep"),
|
||||
makeToolUseMsg("u2", "m1", "tu2", "Grep"),
|
||||
makeToolResultMsg("u3", "tu1"),
|
||||
makeToolResultMsg("u4", "tu2"),
|
||||
];
|
||||
const result = applyGrouping(msgs, [groupableTool]);
|
||||
const grouped = result.messages[0] as any;
|
||||
expect(grouped.type).toBe("grouped_tool_use");
|
||||
expect(grouped.results).toHaveLength(2);
|
||||
});
|
||||
|
||||
test("skips user messages whose tool_results are all grouped", () => {
|
||||
const msgs = [
|
||||
makeToolUseMsg("u1", "m1", "tu1", "Grep"),
|
||||
makeToolUseMsg("u2", "m1", "tu2", "Grep"),
|
||||
makeToolResultMsg("u3", "tu1"),
|
||||
makeToolResultMsg("u4", "tu2"),
|
||||
];
|
||||
const result = applyGrouping(msgs, [groupableTool]);
|
||||
// Only the grouped message should remain — result messages are consumed
|
||||
expect(result.messages).toHaveLength(1);
|
||||
});
|
||||
|
||||
test("preserves non-grouped messages alongside groups", () => {
|
||||
const msgs = [
|
||||
makeTextMsg("u0", "thinking..."),
|
||||
makeToolUseMsg("u1", "m1", "tu1", "Grep"),
|
||||
makeToolUseMsg("u2", "m1", "tu2", "Grep"),
|
||||
makeTextMsg("u3", "done"),
|
||||
];
|
||||
const result = applyGrouping(msgs, [groupableTool]);
|
||||
expect(result.messages).toHaveLength(3); // text + grouped + text
|
||||
expect((result.messages[0] as any).type).toBe("assistant");
|
||||
expect((result.messages[1] as any).type).toBe("grouped_tool_use");
|
||||
expect((result.messages[2] as any).type).toBe("assistant");
|
||||
});
|
||||
|
||||
test("handles empty messages array", () => {
|
||||
const result = applyGrouping([], [groupableTool]);
|
||||
expect(result.messages).toHaveLength(0);
|
||||
});
|
||||
|
||||
test("handles empty tools array", () => {
|
||||
const msgs = [makeToolUseMsg("u1", "m1", "tu1", "Grep")];
|
||||
const result = applyGrouping(msgs, []);
|
||||
expect(result.messages).toHaveLength(1);
|
||||
});
|
||||
});
|
||||
240
src/utils/__tests__/memoize.test.ts
Normal file
240
src/utils/__tests__/memoize.test.ts
Normal file
@ -0,0 +1,240 @@
|
||||
import { mock, describe, expect, test, beforeEach } from "bun:test";
|
||||
|
||||
// Mock heavy deps before importing memoize
|
||||
mock.module("src/utils/log.ts", () => ({
|
||||
logError: () => {},
|
||||
logToFile: () => {},
|
||||
getLogDisplayTitle: () => "",
|
||||
logEvent: () => {},
|
||||
}));
|
||||
mock.module("src/utils/slowOperations.ts", () => ({
|
||||
jsonStringify: JSON.stringify,
|
||||
jsonParse: JSON.parse,
|
||||
slowLogging: { enabled: false },
|
||||
clone: (v: any) => structuredClone(v),
|
||||
cloneDeep: (v: any) => structuredClone(v),
|
||||
callerFrame: () => "",
|
||||
SLOW_OPERATION_THRESHOLD_MS: 100,
|
||||
writeFileSync_DEPRECATED: () => {},
|
||||
}));
|
||||
|
||||
const { memoizeWithTTL, memoizeWithTTLAsync, memoizeWithLRU } = await import(
|
||||
"../memoize"
|
||||
);
|
||||
|
||||
// ─── memoizeWithTTL ────────────────────────────────────────────────────
|
||||
|
||||
describe("memoizeWithTTL", () => {
|
||||
test("returns cached value on second call", () => {
|
||||
let calls = 0;
|
||||
const fn = memoizeWithTTL((x: number) => {
|
||||
calls++;
|
||||
return x * 2;
|
||||
}, 60_000);
|
||||
|
||||
expect(fn(5)).toBe(10);
|
||||
expect(fn(5)).toBe(10);
|
||||
expect(calls).toBe(1);
|
||||
});
|
||||
|
||||
test("different args get separate cache entries", () => {
|
||||
let calls = 0;
|
||||
const fn = memoizeWithTTL((x: number) => {
|
||||
calls++;
|
||||
return x + 1;
|
||||
}, 60_000);
|
||||
|
||||
expect(fn(1)).toBe(2);
|
||||
expect(fn(2)).toBe(3);
|
||||
expect(calls).toBe(2);
|
||||
});
|
||||
|
||||
test("cache.clear empties the cache", () => {
|
||||
let calls = 0;
|
||||
const fn = memoizeWithTTL(() => {
|
||||
calls++;
|
||||
return "val";
|
||||
}, 60_000);
|
||||
|
||||
fn();
|
||||
fn.cache.clear();
|
||||
fn();
|
||||
expect(calls).toBe(2);
|
||||
});
|
||||
|
||||
test("returns stale value and triggers background refresh after TTL", async () => {
|
||||
let calls = 0;
|
||||
const fn = memoizeWithTTL((x: number) => {
|
||||
calls++;
|
||||
return x * calls;
|
||||
}, 1); // 1ms TTL
|
||||
|
||||
const first = fn(10);
|
||||
expect(first).toBe(10); // calls=1, 10*1
|
||||
|
||||
// Wait for TTL to expire
|
||||
await new Promise((r) => setTimeout(r, 10));
|
||||
|
||||
// Should return stale value (10) and trigger background refresh
|
||||
const second = fn(10);
|
||||
expect(second).toBe(10); // stale value returned immediately
|
||||
|
||||
// Wait for background refresh microtask
|
||||
await new Promise((r) => setTimeout(r, 10));
|
||||
|
||||
// Now cache should have refreshed value (calls=2 during refresh, 10*2=20)
|
||||
const third = fn(10);
|
||||
expect(third).toBe(20);
|
||||
});
|
||||
});
|
||||
|
||||
// ─── memoizeWithTTLAsync ───────────────────────────────────────────────
|
||||
|
||||
describe("memoizeWithTTLAsync", () => {
|
||||
test("caches async result", async () => {
|
||||
let calls = 0;
|
||||
const fn = memoizeWithTTLAsync(async (x: number) => {
|
||||
calls++;
|
||||
return x * 2;
|
||||
}, 60_000);
|
||||
|
||||
expect(await fn(5)).toBe(10);
|
||||
expect(await fn(5)).toBe(10);
|
||||
expect(calls).toBe(1);
|
||||
});
|
||||
|
||||
test("deduplicates concurrent cold-miss calls", async () => {
|
||||
let calls = 0;
|
||||
const fn = memoizeWithTTLAsync(async (x: number) => {
|
||||
calls++;
|
||||
await new Promise((r) => setTimeout(r, 20));
|
||||
return x;
|
||||
}, 60_000);
|
||||
|
||||
const [a, b, c] = await Promise.all([fn(1), fn(1), fn(1)]);
|
||||
expect(a).toBe(1);
|
||||
expect(b).toBe(1);
|
||||
expect(c).toBe(1);
|
||||
expect(calls).toBe(1);
|
||||
});
|
||||
|
||||
test("cache.clear forces re-computation", async () => {
|
||||
let calls = 0;
|
||||
const fn = memoizeWithTTLAsync(async () => {
|
||||
calls++;
|
||||
return "v";
|
||||
}, 60_000);
|
||||
|
||||
await fn();
|
||||
fn.cache.clear();
|
||||
await fn();
|
||||
expect(calls).toBe(2);
|
||||
});
|
||||
|
||||
test("returns stale value on TTL expiry", async () => {
|
||||
let calls = 0;
|
||||
const fn = memoizeWithTTLAsync(async () => {
|
||||
calls++;
|
||||
return calls;
|
||||
}, 1); // 1ms TTL
|
||||
|
||||
const first = await fn();
|
||||
expect(first).toBe(1);
|
||||
|
||||
await new Promise((r) => setTimeout(r, 10));
|
||||
|
||||
// Should return stale value (1) immediately
|
||||
const second = await fn();
|
||||
expect(second).toBe(1);
|
||||
});
|
||||
});
|
||||
|
||||
// ─── memoizeWithLRU ────────────────────────────────────────────────────
|
||||
|
||||
describe("memoizeWithLRU", () => {
|
||||
test("caches results by key", () => {
|
||||
let calls = 0;
|
||||
const fn = memoizeWithLRU(
|
||||
(x: number) => {
|
||||
calls++;
|
||||
return x * 2;
|
||||
},
|
||||
(x) => String(x),
|
||||
10
|
||||
);
|
||||
|
||||
expect(fn(5)).toBe(10);
|
||||
expect(fn(5)).toBe(10);
|
||||
expect(calls).toBe(1);
|
||||
});
|
||||
|
||||
test("evicts least recently used when max reached", () => {
|
||||
let calls = 0;
|
||||
const fn = memoizeWithLRU(
|
||||
(x: number) => {
|
||||
calls++;
|
||||
return x;
|
||||
},
|
||||
(x) => String(x),
|
||||
3
|
||||
);
|
||||
|
||||
fn(1);
|
||||
fn(2);
|
||||
fn(3);
|
||||
expect(calls).toBe(3);
|
||||
|
||||
fn(4); // evicts key "1"
|
||||
expect(fn.cache.has("1")).toBe(false);
|
||||
expect(fn.cache.has("4")).toBe(true);
|
||||
});
|
||||
|
||||
test("cache.size returns current size", () => {
|
||||
const fn = memoizeWithLRU(
|
||||
(x: number) => x,
|
||||
(x) => String(x),
|
||||
10
|
||||
);
|
||||
|
||||
fn(1);
|
||||
fn(2);
|
||||
expect(fn.cache.size()).toBe(2);
|
||||
});
|
||||
|
||||
test("cache.delete removes entry", () => {
|
||||
const fn = memoizeWithLRU(
|
||||
(x: number) => x,
|
||||
(x) => String(x),
|
||||
10
|
||||
);
|
||||
|
||||
fn(1);
|
||||
expect(fn.cache.has("1")).toBe(true);
|
||||
fn.cache.delete("1");
|
||||
expect(fn.cache.has("1")).toBe(false);
|
||||
});
|
||||
|
||||
test("cache.get returns value without updating recency", () => {
|
||||
const fn = memoizeWithLRU(
|
||||
(x: number) => x * 10,
|
||||
(x) => String(x),
|
||||
10
|
||||
);
|
||||
|
||||
fn(5);
|
||||
expect(fn.cache.get("5")).toBe(50);
|
||||
});
|
||||
|
||||
test("cache.clear empties everything", () => {
|
||||
const fn = memoizeWithLRU(
|
||||
(x: number) => x,
|
||||
(x) => String(x),
|
||||
10
|
||||
);
|
||||
|
||||
fn(1);
|
||||
fn(2);
|
||||
fn.cache.clear();
|
||||
expect(fn.cache.size()).toBe(0);
|
||||
});
|
||||
});
|
||||
130
src/utils/__tests__/sleep.test.ts
Normal file
130
src/utils/__tests__/sleep.test.ts
Normal file
@ -0,0 +1,130 @@
|
||||
import { describe, expect, test } from "bun:test";
|
||||
import { sleep, withTimeout } from "../sleep";
|
||||
import { sequential } from "../sequential";
|
||||
|
||||
// ─── sleep ─────────────────────────────────────────────────────────────
|
||||
|
||||
describe("sleep", () => {
|
||||
test("resolves after timeout", async () => {
|
||||
const start = Date.now();
|
||||
await sleep(50);
|
||||
expect(Date.now() - start).toBeGreaterThanOrEqual(40);
|
||||
});
|
||||
|
||||
test("resolves immediately when signal already aborted", async () => {
|
||||
const ac = new AbortController();
|
||||
ac.abort();
|
||||
const start = Date.now();
|
||||
await sleep(10_000, ac.signal);
|
||||
expect(Date.now() - start).toBeLessThan(50);
|
||||
});
|
||||
|
||||
test("resolves early on abort (default: no throw)", async () => {
|
||||
const ac = new AbortController();
|
||||
const start = Date.now();
|
||||
const p = sleep(10_000, ac.signal);
|
||||
setTimeout(() => ac.abort(), 30);
|
||||
await p;
|
||||
expect(Date.now() - start).toBeLessThan(200);
|
||||
});
|
||||
|
||||
test("rejects on abort with throwOnAbort", async () => {
|
||||
const ac = new AbortController();
|
||||
ac.abort();
|
||||
await expect(
|
||||
sleep(10_000, ac.signal, { throwOnAbort: true })
|
||||
).rejects.toThrow("aborted");
|
||||
});
|
||||
|
||||
test("rejects with custom abortError", async () => {
|
||||
const ac = new AbortController();
|
||||
ac.abort();
|
||||
const customErr = () => new Error("custom abort");
|
||||
await expect(
|
||||
sleep(10_000, ac.signal, { abortError: customErr })
|
||||
).rejects.toThrow("custom abort");
|
||||
});
|
||||
|
||||
test("throwOnAbort rejects on mid-sleep abort", async () => {
|
||||
const ac = new AbortController();
|
||||
const p = sleep(10_000, ac.signal, { throwOnAbort: true });
|
||||
setTimeout(() => ac.abort(), 20);
|
||||
await expect(p).rejects.toThrow("aborted");
|
||||
});
|
||||
|
||||
test("works without signal", async () => {
|
||||
await sleep(10);
|
||||
// just verify it resolves
|
||||
});
|
||||
});
|
||||
|
||||
// ─── withTimeout ───────────────────────────────────────────────────────
|
||||
|
||||
describe("withTimeout", () => {
|
||||
test("resolves when promise completes before timeout", async () => {
|
||||
const result = await withTimeout(
|
||||
Promise.resolve(42),
|
||||
1000,
|
||||
"timed out"
|
||||
);
|
||||
expect(result).toBe(42);
|
||||
});
|
||||
|
||||
test("rejects when promise takes too long", async () => {
|
||||
const slow = new Promise((resolve) => setTimeout(resolve, 5000));
|
||||
await expect(
|
||||
withTimeout(slow, 50, "operation timed out")
|
||||
).rejects.toThrow("operation timed out");
|
||||
});
|
||||
|
||||
test("rejects propagate through", async () => {
|
||||
await expect(
|
||||
withTimeout(Promise.reject(new Error("inner")), 1000, "timeout")
|
||||
).rejects.toThrow("inner");
|
||||
});
|
||||
});
|
||||
|
||||
// ─── sequential ────────────────────────────────────────────────────────
|
||||
|
||||
describe("sequential", () => {
|
||||
test("executes calls in order", async () => {
|
||||
const order: number[] = [];
|
||||
const fn = sequential(async (n: number) => {
|
||||
await sleep(10);
|
||||
order.push(n);
|
||||
return n;
|
||||
});
|
||||
|
||||
const results = await Promise.all([fn(1), fn(2), fn(3)]);
|
||||
expect(order).toEqual([1, 2, 3]);
|
||||
expect(results).toEqual([1, 2, 3]);
|
||||
});
|
||||
|
||||
test("returns correct result for each call", async () => {
|
||||
const fn = sequential(async (x: number) => x * 2);
|
||||
const r1 = await fn(5);
|
||||
const r2 = await fn(10);
|
||||
expect(r1).toBe(10);
|
||||
expect(r2).toBe(20);
|
||||
});
|
||||
|
||||
test("propagates errors without blocking queue", async () => {
|
||||
const fn = sequential(async (x: number) => {
|
||||
if (x === 2) throw new Error("fail");
|
||||
return x;
|
||||
});
|
||||
|
||||
const p1 = fn(1);
|
||||
const p2 = fn(2);
|
||||
const p3 = fn(3);
|
||||
|
||||
expect(await p1).toBe(1);
|
||||
await expect(p2).rejects.toThrow("fail");
|
||||
expect(await p3).toBe(3);
|
||||
});
|
||||
|
||||
test("handles single call", async () => {
|
||||
const fn = sequential(async (s: string) => s.toUpperCase());
|
||||
expect(await fn("hello")).toBe("HELLO");
|
||||
});
|
||||
});
|
||||
72
src/utils/__tests__/zodToJsonSchema.test.ts
Normal file
72
src/utils/__tests__/zodToJsonSchema.test.ts
Normal file
@ -0,0 +1,72 @@
|
||||
import { describe, expect, test } from "bun:test";
|
||||
import z from "zod/v4";
|
||||
import { zodToJsonSchema } from "../zodToJsonSchema";
|
||||
|
||||
describe("zodToJsonSchema", () => {
|
||||
test("converts string schema", () => {
|
||||
const schema = z.string();
|
||||
const result = zodToJsonSchema(schema);
|
||||
expect(result.type).toBe("string");
|
||||
});
|
||||
|
||||
test("converts number schema", () => {
|
||||
const schema = z.number();
|
||||
const result = zodToJsonSchema(schema);
|
||||
expect(result.type).toBe("number");
|
||||
});
|
||||
|
||||
test("converts object schema with properties", () => {
|
||||
const schema = z.object({
|
||||
name: z.string(),
|
||||
age: z.number(),
|
||||
});
|
||||
const result = zodToJsonSchema(schema);
|
||||
expect(result.type).toBe("object");
|
||||
expect(result.properties).toBeDefined();
|
||||
expect((result.properties as any).name).toBeDefined();
|
||||
expect((result.properties as any).age).toBeDefined();
|
||||
});
|
||||
|
||||
test("converts enum schema", () => {
|
||||
const schema = z.enum(["a", "b", "c"]);
|
||||
const result = zodToJsonSchema(schema);
|
||||
expect(result.enum).toEqual(["a", "b", "c"]);
|
||||
});
|
||||
|
||||
test("converts optional fields", () => {
|
||||
const schema = z.object({
|
||||
required: z.string(),
|
||||
optional: z.string().optional(),
|
||||
});
|
||||
const result = zodToJsonSchema(schema);
|
||||
expect(result.required).toContain("required");
|
||||
});
|
||||
|
||||
test("caches results for same schema reference", () => {
|
||||
const schema = z.string();
|
||||
const first = zodToJsonSchema(schema);
|
||||
const second = zodToJsonSchema(schema);
|
||||
expect(first).toBe(second); // same reference (cached)
|
||||
});
|
||||
|
||||
test("different schemas get different results", () => {
|
||||
const s1 = z.string();
|
||||
const s2 = z.number();
|
||||
const r1 = zodToJsonSchema(s1);
|
||||
const r2 = zodToJsonSchema(s2);
|
||||
expect(r1).not.toBe(r2);
|
||||
expect(r1.type).not.toBe(r2.type);
|
||||
});
|
||||
|
||||
test("converts array schema", () => {
|
||||
const schema = z.array(z.string());
|
||||
const result = zodToJsonSchema(schema);
|
||||
expect(result.type).toBe("array");
|
||||
expect((result.items as any).type).toBe("string");
|
||||
});
|
||||
|
||||
test("converts boolean schema", () => {
|
||||
const result = zodToJsonSchema(z.boolean());
|
||||
expect(result.type).toBe("boolean");
|
||||
});
|
||||
});
|
||||
162
src/utils/permissions/__tests__/PermissionMode.test.ts
Normal file
162
src/utils/permissions/__tests__/PermissionMode.test.ts
Normal file
@ -0,0 +1,162 @@
|
||||
import { mock, describe, expect, test } from "bun:test";
|
||||
|
||||
// Mock slowOperations to cut bootstrap/state dependency chain
|
||||
// (figures.js → env.js → fsOperations.js → slowOperations.js → bootstrap/state.js)
|
||||
mock.module("src/utils/slowOperations.ts", () => ({
|
||||
jsonStringify: JSON.stringify,
|
||||
jsonParse: JSON.parse,
|
||||
slowLogging: { enabled: false },
|
||||
clone: (v: any) => structuredClone(v),
|
||||
cloneDeep: (v: any) => structuredClone(v),
|
||||
callerFrame: () => "",
|
||||
SLOW_OPERATION_THRESHOLD_MS: 100,
|
||||
writeFileSync_DEPRECATED: () => {},
|
||||
}));
|
||||
mock.module("src/utils/log.ts", () => ({
|
||||
logError: () => {},
|
||||
logToFile: () => {},
|
||||
getLogDisplayTitle: () => "",
|
||||
logEvent: () => {},
|
||||
}));
|
||||
|
||||
const {
|
||||
isExternalPermissionMode,
|
||||
toExternalPermissionMode,
|
||||
permissionModeFromString,
|
||||
permissionModeTitle,
|
||||
isDefaultMode,
|
||||
permissionModeShortTitle,
|
||||
permissionModeSymbol,
|
||||
getModeColor,
|
||||
PERMISSION_MODES,
|
||||
EXTERNAL_PERMISSION_MODES,
|
||||
} = await import("../PermissionMode");
|
||||
|
||||
// ─── PERMISSION_MODES / EXTERNAL_PERMISSION_MODES ──────────────────────
|
||||
|
||||
describe("PERMISSION_MODES", () => {
|
||||
test("includes all external modes", () => {
|
||||
for (const m of EXTERNAL_PERMISSION_MODES) {
|
||||
expect(PERMISSION_MODES).toContain(m);
|
||||
}
|
||||
});
|
||||
|
||||
test("includes default, plan, acceptEdits, bypassPermissions, dontAsk", () => {
|
||||
expect(PERMISSION_MODES).toContain("default");
|
||||
expect(PERMISSION_MODES).toContain("plan");
|
||||
expect(PERMISSION_MODES).toContain("acceptEdits");
|
||||
expect(PERMISSION_MODES).toContain("bypassPermissions");
|
||||
expect(PERMISSION_MODES).toContain("dontAsk");
|
||||
});
|
||||
});
|
||||
|
||||
// ─── permissionModeFromString ──────────────────────────────────────────
|
||||
|
||||
describe("permissionModeFromString", () => {
|
||||
test("returns valid mode for known string", () => {
|
||||
expect(permissionModeFromString("plan")).toBe("plan");
|
||||
expect(permissionModeFromString("default")).toBe("default");
|
||||
expect(permissionModeFromString("dontAsk")).toBe("dontAsk");
|
||||
});
|
||||
|
||||
test("returns 'default' for unknown string", () => {
|
||||
expect(permissionModeFromString("unknown")).toBe("default");
|
||||
expect(permissionModeFromString("")).toBe("default");
|
||||
});
|
||||
});
|
||||
|
||||
// ─── permissionModeTitle ───────────────────────────────────────────────
|
||||
|
||||
describe("permissionModeTitle", () => {
|
||||
test("returns title for known modes", () => {
|
||||
expect(permissionModeTitle("default")).toBe("Default");
|
||||
expect(permissionModeTitle("plan")).toBe("Plan Mode");
|
||||
expect(permissionModeTitle("acceptEdits")).toBe("Accept edits");
|
||||
});
|
||||
|
||||
test("falls back to Default for unknown mode", () => {
|
||||
expect(permissionModeTitle("nonexistent" as any)).toBe("Default");
|
||||
});
|
||||
});
|
||||
|
||||
// ─── permissionModeShortTitle ──────────────────────────────────────────
|
||||
|
||||
describe("permissionModeShortTitle", () => {
|
||||
test("returns short title for known modes", () => {
|
||||
expect(permissionModeShortTitle("default")).toBe("Default");
|
||||
expect(permissionModeShortTitle("plan")).toBe("Plan");
|
||||
expect(permissionModeShortTitle("bypassPermissions")).toBe("Bypass");
|
||||
});
|
||||
});
|
||||
|
||||
// ─── permissionModeSymbol ──────────────────────────────────────────────
|
||||
|
||||
describe("permissionModeSymbol", () => {
|
||||
test("returns empty string for default", () => {
|
||||
expect(permissionModeSymbol("default")).toBe("");
|
||||
});
|
||||
|
||||
test("returns non-empty for non-default modes", () => {
|
||||
expect(permissionModeSymbol("plan").length).toBeGreaterThan(0);
|
||||
expect(permissionModeSymbol("acceptEdits").length).toBeGreaterThan(0);
|
||||
});
|
||||
});
|
||||
|
||||
// ─── getModeColor ──────────────────────────────────────────────────────
|
||||
|
||||
describe("getModeColor", () => {
|
||||
test("returns 'text' for default", () => {
|
||||
expect(getModeColor("default")).toBe("text");
|
||||
});
|
||||
|
||||
test("returns 'planMode' for plan", () => {
|
||||
expect(getModeColor("plan")).toBe("planMode");
|
||||
});
|
||||
|
||||
test("returns 'error' for bypassPermissions", () => {
|
||||
expect(getModeColor("bypassPermissions")).toBe("error");
|
||||
});
|
||||
});
|
||||
|
||||
// ─── isDefaultMode ─────────────────────────────────────────────────────
|
||||
|
||||
describe("isDefaultMode", () => {
|
||||
test("returns true for 'default'", () => {
|
||||
expect(isDefaultMode("default")).toBe(true);
|
||||
});
|
||||
|
||||
test("returns true for undefined", () => {
|
||||
expect(isDefaultMode(undefined)).toBe(true);
|
||||
});
|
||||
|
||||
test("returns false for other modes", () => {
|
||||
expect(isDefaultMode("plan")).toBe(false);
|
||||
expect(isDefaultMode("dontAsk")).toBe(false);
|
||||
});
|
||||
});
|
||||
|
||||
// ─── toExternalPermissionMode ──────────────────────────────────────────
|
||||
|
||||
describe("toExternalPermissionMode", () => {
|
||||
test("maps default to default", () => {
|
||||
expect(toExternalPermissionMode("default")).toBe("default");
|
||||
});
|
||||
|
||||
test("maps plan to plan", () => {
|
||||
expect(toExternalPermissionMode("plan")).toBe("plan");
|
||||
});
|
||||
|
||||
test("maps dontAsk to dontAsk", () => {
|
||||
expect(toExternalPermissionMode("dontAsk")).toBe("dontAsk");
|
||||
});
|
||||
});
|
||||
|
||||
// ─── isExternalPermissionMode ──────────────────────────────────────────
|
||||
|
||||
describe("isExternalPermissionMode", () => {
|
||||
test("returns true for external modes (non-ant)", () => {
|
||||
// USER_TYPE is not 'ant' in tests, so always true
|
||||
expect(isExternalPermissionMode("default")).toBe(true);
|
||||
expect(isExternalPermissionMode("plan")).toBe(true);
|
||||
});
|
||||
});
|
||||
55
src/utils/permissions/__tests__/dangerousPatterns.test.ts
Normal file
55
src/utils/permissions/__tests__/dangerousPatterns.test.ts
Normal file
@ -0,0 +1,55 @@
|
||||
import { describe, expect, test } from "bun:test";
|
||||
import {
|
||||
CROSS_PLATFORM_CODE_EXEC,
|
||||
DANGEROUS_BASH_PATTERNS,
|
||||
} from "../dangerousPatterns";
|
||||
|
||||
describe("CROSS_PLATFORM_CODE_EXEC", () => {
|
||||
test("is a non-empty readonly array of strings", () => {
|
||||
expect(CROSS_PLATFORM_CODE_EXEC.length).toBeGreaterThan(0);
|
||||
for (const p of CROSS_PLATFORM_CODE_EXEC) {
|
||||
expect(typeof p).toBe("string");
|
||||
}
|
||||
});
|
||||
|
||||
test("includes core interpreters", () => {
|
||||
expect(CROSS_PLATFORM_CODE_EXEC).toContain("python");
|
||||
expect(CROSS_PLATFORM_CODE_EXEC).toContain("node");
|
||||
expect(CROSS_PLATFORM_CODE_EXEC).toContain("ruby");
|
||||
expect(CROSS_PLATFORM_CODE_EXEC).toContain("perl");
|
||||
});
|
||||
|
||||
test("includes package runners", () => {
|
||||
expect(CROSS_PLATFORM_CODE_EXEC).toContain("npx");
|
||||
expect(CROSS_PLATFORM_CODE_EXEC).toContain("bunx");
|
||||
});
|
||||
|
||||
test("includes shells", () => {
|
||||
expect(CROSS_PLATFORM_CODE_EXEC).toContain("bash");
|
||||
expect(CROSS_PLATFORM_CODE_EXEC).toContain("sh");
|
||||
});
|
||||
});
|
||||
|
||||
describe("DANGEROUS_BASH_PATTERNS", () => {
|
||||
test("includes all cross-platform patterns", () => {
|
||||
for (const p of CROSS_PLATFORM_CODE_EXEC) {
|
||||
expect(DANGEROUS_BASH_PATTERNS).toContain(p);
|
||||
}
|
||||
});
|
||||
|
||||
test("includes unix-specific patterns", () => {
|
||||
expect(DANGEROUS_BASH_PATTERNS).toContain("zsh");
|
||||
expect(DANGEROUS_BASH_PATTERNS).toContain("fish");
|
||||
expect(DANGEROUS_BASH_PATTERNS).toContain("eval");
|
||||
expect(DANGEROUS_BASH_PATTERNS).toContain("exec");
|
||||
expect(DANGEROUS_BASH_PATTERNS).toContain("sudo");
|
||||
expect(DANGEROUS_BASH_PATTERNS).toContain("xargs");
|
||||
expect(DANGEROUS_BASH_PATTERNS).toContain("env");
|
||||
});
|
||||
|
||||
test("all elements are strings", () => {
|
||||
for (const p of DANGEROUS_BASH_PATTERNS) {
|
||||
expect(typeof p).toBe("string");
|
||||
}
|
||||
});
|
||||
});
|
||||
67
src/utils/shell/__tests__/outputLimits.test.ts
Normal file
67
src/utils/shell/__tests__/outputLimits.test.ts
Normal file
@ -0,0 +1,67 @@
|
||||
import { mock, describe, expect, test, afterEach } from "bun:test";
|
||||
|
||||
// Mock debug.ts to cut the bootstrap/state dependency chain
|
||||
mock.module("src/utils/debug.ts", () => ({
|
||||
logForDebugging: () => {},
|
||||
isDebugMode: () => false,
|
||||
isDebugToStdErr: () => false,
|
||||
getDebugFilePath: () => null,
|
||||
getDebugFilter: () => null,
|
||||
getMinDebugLogLevel: () => "debug",
|
||||
getDebugLogPath: () => "/tmp/mock-debug.log",
|
||||
flushDebugLogs: async () => {},
|
||||
enableDebugLogging: () => false,
|
||||
setHasFormattedOutput: () => {},
|
||||
getHasFormattedOutput: () => false,
|
||||
logAntError: () => {},
|
||||
}));
|
||||
|
||||
const {
|
||||
getMaxOutputLength,
|
||||
BASH_MAX_OUTPUT_UPPER_LIMIT,
|
||||
BASH_MAX_OUTPUT_DEFAULT,
|
||||
} = await import("../outputLimits");
|
||||
|
||||
describe("outputLimits constants", () => {
|
||||
test("BASH_MAX_OUTPUT_UPPER_LIMIT is 150000", () => {
|
||||
expect(BASH_MAX_OUTPUT_UPPER_LIMIT).toBe(150_000);
|
||||
});
|
||||
|
||||
test("BASH_MAX_OUTPUT_DEFAULT is 30000", () => {
|
||||
expect(BASH_MAX_OUTPUT_DEFAULT).toBe(30_000);
|
||||
});
|
||||
});
|
||||
|
||||
describe("getMaxOutputLength", () => {
|
||||
const saved = process.env.BASH_MAX_OUTPUT_LENGTH;
|
||||
|
||||
afterEach(() => {
|
||||
if (saved === undefined) delete process.env.BASH_MAX_OUTPUT_LENGTH;
|
||||
else process.env.BASH_MAX_OUTPUT_LENGTH = saved;
|
||||
});
|
||||
|
||||
test("returns default when env not set", () => {
|
||||
delete process.env.BASH_MAX_OUTPUT_LENGTH;
|
||||
expect(getMaxOutputLength()).toBe(30_000);
|
||||
});
|
||||
|
||||
test("returns parsed value when valid", () => {
|
||||
process.env.BASH_MAX_OUTPUT_LENGTH = "50000";
|
||||
expect(getMaxOutputLength()).toBe(50_000);
|
||||
});
|
||||
|
||||
test("caps at upper limit", () => {
|
||||
process.env.BASH_MAX_OUTPUT_LENGTH = "999999";
|
||||
expect(getMaxOutputLength()).toBe(150_000);
|
||||
});
|
||||
|
||||
test("returns default for invalid value", () => {
|
||||
process.env.BASH_MAX_OUTPUT_LENGTH = "not-a-number";
|
||||
expect(getMaxOutputLength()).toBe(30_000);
|
||||
});
|
||||
|
||||
test("returns default for negative value", () => {
|
||||
process.env.BASH_MAX_OUTPUT_LENGTH = "-1";
|
||||
expect(getMaxOutputLength()).toBe(30_000);
|
||||
});
|
||||
});
|
||||
Loading…
x
Reference in New Issue
Block a user