
sf-data

by @jaganpro, v1.0.0

Installation
npx skills add jaganpro/sf-skills --skill sf-data
Description


```yaml
name: sf-data
description: >
  Salesforce data operations with 130-point scoring. TRIGGER when: user creates
  test data, performs bulk import/export, uses sf data CLI commands, or needs
  data factory patterns for Apex tests. DO NOT TRIGGER when: SOQL query writing
  only (use sf-soql), Apex test execution (use sf-testing), or metadata
  deployment (use sf-deploy).
license: MIT
metadata:
  version: "1.1.0"
  author: "Jag Valaiyapathy"
  scoring: "130 points across 7 categories"
```

Salesforce Data Operations Expert (sf-data)

Use this skill when the user needs Salesforce data work: record CRUD, bulk import/export, test data generation, cleanup scripts, or data factory patterns for validating Apex, Flow, or integration behavior.

When This Skill Owns the Task

Use sf-data when the work involves:

  • sf data CLI commands
  • record creation, update, delete, upsert, export, or tree import/export
  • realistic test data generation
  • bulk data operations and cleanup
  • Apex anonymous scripts for data seeding / rollback

Delegate elsewhere when the user is:

  • writing SOQL queries only (use sf-soql)
  • running Apex tests (use sf-testing)
  • deploying metadata (use sf-deploy)

Important Mode Decision

Confirm which mode the user wants:

| Mode | Use when |
|---|---|
| Script generation | they want reusable .apex, CSV, or JSON assets without touching an org yet |
| Remote execution | they want records created or changed in a real org now |

Do not assume remote execution if the user may only want scripts.


Required Context to Gather First

Ask for or infer:

  • target object(s)
  • org alias, if remote execution is required
  • operation type: query, create, update, delete, upsert, import, export, cleanup
  • expected volume
  • whether this is test data, migration data, or one-off troubleshooting data
  • any parent-child relationships that must exist first

Core Operating Rules

  • sf-data acts on remote org data only once the mode decision confirms the user wants execution rather than local script generation.
  • Objects and fields must already exist before data creation.
  • For automation testing, prefer 251+ records when bulk behavior matters.
  • Always think about cleanup before creating large or noisy datasets.
  • Never use real PII in generated test data.

If metadata is missing, stop and hand off to sf-metadata (to discover the schema) or sf-deploy (to deploy the missing objects and fields) first.


Recommended Workflow

1. Verify prerequisites

Confirm object / field availability, org auth, and required parent records.
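One way to script this check is sketched below. The org alias `my-sandbox` is a placeholder, and each `sf` command runs only when the CLI is installed; otherwise the step is logged and skipped.

```shell
#!/bin/sh
# Prerequisite check sketch: org auth and object availability.
# "my-sandbox" is a placeholder org alias.
set -u
: > prereq.log

# Log each check, then run it only when the sf CLI is installed.
run() {
  echo "CHECK: $*" >> prereq.log
  if command -v sf >/dev/null 2>&1; then
    "$@" || echo "  -> failed (check org auth)" >&2
  else
    echo "  -> skipped (sf CLI not installed)" >&2
  fi
}

# 1. Is the target org authenticated and reachable?
run sf org display --target-org my-sandbox

# 2. Do the target object and its fields exist?
run sf sobject describe --sobject Account --target-org my-sandbox

cat prereq.log
```

The log doubles as a record of which prerequisites were actually verified before any data is touched.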

2. Choose the smallest correct mechanism

| Need | Default approach |
|---|---|
| small one-off CRUD | sf data single-record commands |
| large import/export | Bulk API 2.0 via sf data ... bulk |
| parent-child seed set | tree import/export |
| reusable test dataset | factory / anonymous Apex script |
| reversible experiment | cleanup script or savepoint-based approach |
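For the small one-off CRUD row, a minimal sketch with `sf data` single-record commands. The alias `my-sandbox` and the record ID are placeholders; each command is logged as a plan and only executed when the `sf` CLI is present.

```shell
#!/bin/sh
# Single-record CRUD sketch using sf data commands.
# "my-sandbox" and the record ID below are placeholders.
set -u
: > crud.log

run() {
  echo "PLAN: $*" >> crud.log
  if command -v sf >/dev/null 2>&1; then
    "$@" || echo "  -> failed (check org auth)" >&2
  else
    echo "  -> skipped (sf CLI not installed)" >&2
  fi
}

# Create one record
run sf data create record --sobject Account \
  --values "Name='TESTDATA Smoke Account'" --target-org my-sandbox

# Read it back by the TESTDATA naming pattern
run sf data query \
  --query "SELECT Id, Name FROM Account WHERE Name LIKE 'TESTDATA%'" \
  --target-org my-sandbox

# Delete by ID (placeholder ID shown)
run sf data delete record --sobject Account \
  --record-id 001000000000001AAA --target-org my-sandbox

cat crud.log
```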

3. Execute or generate assets

Use the built-in templates under assets/ when they fit:

  • assets/factories/
  • assets/bulk/
  • assets/cleanup/
  • assets/soql/
  • assets/csv/
  • assets/json/
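For a parent-child seed set, the tree file can be generated locally first and imported later. A sketch, assuming an Account-with-Contact shape; the file name, field values, and org alias `my-sandbox` are illustrative, and the import only runs when the `sf` CLI is installed.

```shell
#!/bin/sh
# Generate an Account-with-child-Contact tree file for sf data import tree.
set -u

cat > Account-Contact.json <<'EOF'
{
  "records": [
    {
      "attributes": { "type": "Account", "referenceId": "AccountRef1" },
      "Name": "TESTDATA Acme",
      "Industry": "Technology",
      "Contacts": {
        "records": [
          {
            "attributes": { "type": "Contact", "referenceId": "ContactRef1" },
            "LastName": "Tester",
            "Email": "tester@example.invalid"
          }
        ]
      }
    }
  ]
}
EOF

# Import only when the CLI is available; "my-sandbox" is a placeholder alias.
if command -v sf >/dev/null 2>&1; then
  sf data import tree --files Account-Contact.json --target-org my-sandbox \
    || echo "import failed; check org auth" >&2
else
  echo "sf CLI not found; tree file written to Account-Contact.json" >&2
fi
```

Because the tree format resolves `referenceId` links at import time, the child Contact is attached to its parent Account without hand-managing IDs.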

4. Verify results

Check counts, relationships, and record IDs after creation or update.

5. Leave cleanup guidance

Provide exact cleanup commands or rollback assets whenever data was created.


High-Signal Rules

Bulk safety

  • use bulk operations for large volumes
  • test automation-sensitive behavior with 251+ records where appropriate
  • avoid one-record-at-a-time patterns for bulk scenarios
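The 251+ guidance can be sketched as a local CSV generator feeding a Bulk API 2.0 import: 251 rows forces trigger logic across more than one 200-record batch. Object, field, and alias names are placeholders, and the import is skipped when the `sf` CLI is absent.

```shell
#!/bin/sh
# Generate 251 Account rows (header + 251 data lines), then bulk-import
# them if the sf CLI is available. 251 records span two trigger batches.
set -u

CSV=accounts_seed.csv
N=251

printf 'Name,Industry\n' > "$CSV"
i=1
while [ "$i" -le "$N" ]; do
  printf 'TESTDATA Account %03d,Technology\n' "$i" >> "$CSV"
  i=$((i + 1))
done

# "my-sandbox" is a placeholder org alias.
if command -v sf >/dev/null 2>&1; then
  sf data import bulk --sobject Account --file "$CSV" \
    --target-org my-sandbox --wait 10 \
    || echo "import failed; check org auth" >&2
else
  echo "sf CLI not found; seed file written to $CSV" >&2
fi
```

The `TESTDATA` name prefix is a deliberate choice: it gives the cleanup step an unambiguous delete-by-pattern handle.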

Data integrity

  • include required fields
  • verify parent IDs and relationship integrity
  • account for validation rules and duplicate constraints

Cleanup discipline

Prefer one of:

  • delete-by-ID
  • delete-by-pattern
  • delete-by-created-date window
  • rollback / savepoint patterns for script-based test runs
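A delete-by-pattern sketch of the cleanup options above: export matching IDs, then bulk-delete them. The `TESTDATA` prefix and org alias `my-sandbox` are assumptions; without the `sf` CLI the script only records the plan.

```shell
#!/bin/sh
# Delete-by-pattern cleanup sketch: export matching IDs, then bulk-delete.
set -u

SOQL="SELECT Id FROM Account WHERE Name LIKE 'TESTDATA%'"
echo "Cleanup plan: $SOQL" | tee cleanup_plan.txt

# "my-sandbox" is a placeholder org alias; steps run only when sf is installed.
if command -v sf >/dev/null 2>&1; then
  sf data query --query "$SOQL" --result-format csv \
      --target-org my-sandbox > testdata_ids.csv \
    && sf data delete bulk --sobject Account --file testdata_ids.csv \
      --target-org my-sandbox --wait 10 \
    || echo "cleanup failed; check org auth" >&2
else
  echo "sf CLI not found; run the plan above manually" >&2
fi
```

Scoping the delete to an exported ID file, rather than deleting by query directly, leaves an auditable record of exactly what was removed.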

Common Failure Patterns

| Error | Likely cause | Default fix direction |
|---|---|---|
| INVALID_FIELD | wrong field API name or FLS issue | verify schema and access |
| REQUIRED_FIELD_MISSING | mandatory field omitted | include required values |
| INVALID_CROSS_REFERENCE_KEY | bad parent ID | create or verify the parent first |
| FIELD_CUSTOM_VALIDATION_EXCEPTION | validation rule blocked the record | use valid test data or adjust setup |
| DUPLICATE_VALUE | unique-field conflict | query existing data first |
| bulk limits / timeouts | wrong tool for the volume | switch to bulk / staged import |


Output Format

When finishing, report in this order:

  1. Operation performed
  2. Objects and counts
  3. Target org or local artifact path
  4. Record IDs / output files
  5. Verification result
  6. Cleanup instructions

Suggested shape:

```
Data operation: <create / update / delete / export / seed>
Objects: <object + counts>
Target: <org alias or local path>
Artifacts: <record ids / csv / apex / json files>
Verification: <passed / partial / failed>
Cleanup: <exact delete or rollback guidance>
```

Cross-Skill Integration

| Need | Delegate to | Reason |
|---|---|---|
| discover object / field structure | sf-metadata | accurate schema grounding |
| run bulk-sensitive Apex validation | sf-testing | test execution and coverage |
| deploy missing schema first | sf-deploy | metadata readiness |
| implement production logic consuming the data | sf-apex or sf-flow | behavior implementation |


Reference Map

  • Start here
  • Query / bulk / cleanup
  • Examples / limits


Score Guide

| Score | Meaning |
|---|---|
| 117+ | strong production-safe data workflow |
| 104–116 | good operation with minor improvements possible |
| 91–103 | acceptable but review advised |
| 78–90 | partial / risky patterns present |
| < 78 | blocked until corrected |
