// Package runner implements a go/analysis runner. It makes heavy use
// of on-disk caching to reduce overall memory usage and to speed up
// repeat runs.
//
// A Runner maps a list of analyzers and package patterns to a list of
// results. Results provide access to diagnostics, directives, errors
// encountered, and information about packages. Results explicitly do
// not contain ASTs or type information. All position information is
// returned in the form of token.Position, not token.Pos. All work
// that requires access to the loaded representation of a package has
// to occur inside analyzers.
//
// Planning and execution
//
// Analyzing packages is split into two phases: planning and
// execution.
//
// During planning, a directed acyclic graph of package dependencies
// is computed. We materialize the full graph so that we can execute
// the graph from the bottom up, without keeping unnecessary data in
// memory during a DFS and with simplified parallel execution.
//
// During execution, leaf nodes (nodes with no outstanding
// dependencies) get executed in parallel, bounded by a semaphore
// sized according to the number of CPUs. Conceptually, this happens
// in a loop, processing new leaf nodes as they appear, until no more
// nodes are left. In the actual implementation, nodes know their
// dependents, and the last dependency of a node to be processed is
// responsible for scheduling its dependent.
//
// The graph is rooted at a synthetic root node. Upon execution of the
// root node, the algorithm terminates.
//
// Analyzing a package repeats the same planning + execution steps,
// but this time on a graph of analyzers for the package. Parallel
// execution of individual analyzers is bounded by the same semaphore
// as executing packages.
// Actions are executed in parallel where the dependency graph allows.
// Overall parallelism is bounded by a semaphore, sized according to
// GOMAXPROCS. Each concurrently processed package takes up a
// token, as does each analyzer – but a package can always execute at
// least one analyzer, using the package's token.
//
// Depending on the overall shape of the graph, there may be GOMAXPROCS
// packages running a single analyzer each, a single package running
// GOMAXPROCS analyzers, or anything in between.
//
// Total memory consumption grows roughly linearly with the number of
// CPUs, while total execution time is inversely proportional to the
// number of CPUs. Overall, parallelism is affected by the shape of
// the dependency graph. A lot of inter-connected packages will see
// less parallelism than a lot of independent packages.
// The runner caches facts, directives and diagnostics in a
// content-addressable cache that is designed after Go's own cache.
// Additionally, it makes use of Go's export data.
//
// This cache not only speeds up repeat runs, it also reduces peak
// memory usage. When we've analyzed a package, we cache the results
// and drop them from memory. When a dependent needs any of this
// information, or when analysis is complete and we wish to render the
// results, the data gets loaded from disk again.
//
// Data only exists in memory when it is immediately needed, not
// retained for possible future uses. This trades increased CPU usage
// for reduced memory usage. A single dependency may be loaded many
// times over, but it greatly reduces peak memory usage, as an
// arbitrary amount of time may pass between analyzing a dependency
// and its dependent, during which other packages will be processed.
// OPT(dh): we could reduce disk storage usage of cached data by
// compressing it, either directly at the cache layer, or by feeding
// compressed data to the cache. Of course doing so may negatively
// affect CPU usage, and there are lower hanging fruit, such as
// needing to cache less data in the first place.
// OPT(dh): right now, each package is analyzed completely
// independently. Each package loads all of its dependencies from
// export data and cached facts. If we have two packages A and B,
// which both depend on C, and which both get analyzed in parallel,
// then C will be loaded twice. This wastes CPU time and memory. It
// would be nice if we could reuse a single C for the analysis of both
// A and B.
// We can't reuse the actual types.Package or facts, because each
// package gets its own token.FileSet. Sharing a global FileSet has
// several drawbacks, including increased memory usage and running the
// risk of running out of FileSet address space.
// We could however avoid loading the same raw export data from disk
// twice, as well as deserializing gob data twice. One possible
// solution would be a duplicate-suppressing in-memory cache that
// caches data for a limited amount of time. When the same package
// needs to be loaded twice in close succession, we can reuse work,
// without holding unnecessary data in memory for an extended period
// of time.
// We would likely need to do extensive benchmarking to figure out how
// long to keep data around to find a sweet spot where we reduce CPU
// load without increasing memory usage.
// We can probably populate the cache after we've analyzed a package,
// on the assumption that it will have to be loaded again in the near
// future.
	"honnef.co/go/tools/analysis/lint"
	"honnef.co/go/tools/analysis/report"
	"honnef.co/go/tools/config"
	"honnef.co/go/tools/go/loader"
	"honnef.co/go/tools/internal/cache"
	tsync "honnef.co/go/tools/internal/sync"
	"honnef.co/go/tools/unused"

	"golang.org/x/tools/go/analysis"
	"golang.org/x/tools/go/packages"
	"golang.org/x/tools/go/types/objectpath"
const sanityCheck = false

type Diagnostic struct {
	Position token.Position
	SuggestedFixed []SuggestedFix
	Related []RelatedInformation

// RelatedInformation provides additional context for a diagnostic.
type RelatedInformation struct {
	Position token.Position

type SuggestedFix struct {

type TextEdit struct {
	Position token.Position

// A Result describes the result of analyzing a single package.
//
// It holds references to cached diagnostics and directives. They can
// be loaded on demand with Diagnostics and Directives respectively.
	Package *loader.PackageSpec
	// Action results, paths to files

type SerializedDirective struct {
	// The position of the comment
	DirectivePosition token.Position
	// The position of the node that the comment is attached to
	NodePosition token.Position

func serializeDirective(dir lint.Directive, fset *token.FileSet) SerializedDirective {
	return SerializedDirective{
		Command: dir.Command,
		Arguments: dir.Arguments,
		DirectivePosition: report.DisplayPosition(fset, dir.Directive.Pos()),
		NodePosition: report.DisplayPosition(fset, dir.Node.Pos()),

type ResultData struct {
	Directives []SerializedDirective
	Diagnostics []Diagnostic
	Unused unused.SerializedResult

func (r Result) Load() (ResultData, error) {
		panic("Load called on failed Result")
		// this package was only a dependency
		return ResultData{}, nil
	f, err := os.Open(r.results)
		return ResultData{}, fmt.Errorf("failed loading result: %w", err)
	err = gob.NewDecoder(f).Decode(&out)

type action interface {
	DecrementPending() bool

type baseAction struct {
	// Action description
	// failed is set to true if the action couldn't be processed. This
	// may either be due to an error specific to this action, in
	// which case the errors field will be populated, or due to a
	// dependency being marked as failed, in which case errors will be

func (act *baseAction) Deps() []action { return act.deps }
func (act *baseAction) Triggers() []action { return act.triggers }
func (act *baseAction) DecrementPending() bool {
	return atomic.AddUint32(&act.pending, ^uint32(0)) == 0
func (act *baseAction) MarkFailed() { act.failed = true }
func (act *baseAction) IsFailed() bool { return act.failed }
func (act *baseAction) AddError(err error) { act.errors = append(act.errors, err) }
// packageAction describes the act of loading a package, fully
// analyzing it, and storing the results.
type packageAction struct {
	// Action description
	Package *loader.PackageSpec

func (act *packageAction) String() string {
	return fmt.Sprintf("packageAction(%s)", act.Package)

type objectFact struct {

type objectFactKey struct {

type packageFactKey struct {

type gobFact struct {
// analyzerAction describes the act of analyzing a package with a
// single analyzer.
type analyzerAction struct {
	// Action description
	Analyzer *analysis.Analyzer
	// We can store actual results here without worrying about memory
	// consumption because analyzer actions get garbage collected once
	// a package has been fully analyzed.
	Diagnostics []Diagnostic
	ObjectFacts map[objectFactKey]objectFact
	PackageFacts map[packageFactKey]analysis.Fact

func (act *analyzerAction) String() string {
	return fmt.Sprintf("analyzerAction(%s)", act.Analyzer)

// A Runner executes analyzers on packages.
	// Config that gets merged with per-package configs
	semaphore tsync.Semaphore

type subrunner struct {
	analyzers []*analysis.Analyzer
	factAnalyzers []*analysis.Analyzer

// New returns a new Runner.
func New(cfg config.Config) (*Runner, error) {
	cache, err := cache.Default()
		semaphore: tsync.NewSemaphore(runtime.GOMAXPROCS(0)),

func newSubrunner(r *Runner, analyzers []*analysis.Analyzer) *subrunner {
	analyzerNames := make([]string, len(analyzers))
	for i, a := range analyzers {
		analyzerNames[i] = a.Name
	sort.Strings(analyzerNames)
	var factAnalyzers []*analysis.Analyzer
	for _, a := range analyzers {
		if len(a.FactTypes) > 0 {
			factAnalyzers = append(factAnalyzers, a)
		analyzers: analyzers,
		factAnalyzers: factAnalyzers,
		analyzerNames: strings.Join(analyzerNames, ","),

func newPackageActionRoot(pkg *loader.PackageSpec, cache map[*loader.PackageSpec]*packageAction) *packageAction {
	a := newPackageAction(pkg, cache)

func newPackageAction(pkg *loader.PackageSpec, cache map[*loader.PackageSpec]*packageAction) *packageAction {
	if a, ok := cache[pkg]; ok {
		factsOnly: true, // will be overwritten by any call to Action
	if len(pkg.Errors) > 0 {
		a.errors = make([]error, len(pkg.Errors))
		for i, err := range pkg.Errors {
	// We don't need to process our imports if this package is
	a.deps = make([]action, 0, len(pkg.Imports))
	for _, dep := range pkg.Imports {
		depa := newPackageAction(dep, cache)
		depa.triggers = append(depa.triggers, a)
		a.deps = append(a.deps, depa)
	// sort dependencies because the list of dependencies is part of
	sort.Slice(a.deps, func(i, j int) bool {
		return a.deps[i].(*packageAction).Package.ID < a.deps[j].(*packageAction).Package.ID
	a.pending = uint32(len(a.deps))

func newAnalyzerAction(an *analysis.Analyzer, cache map[*analysis.Analyzer]*analyzerAction) *analyzerAction {
	if a, ok := cache[an]; ok {
	a := &analyzerAction{
		ObjectFacts: map[objectFactKey]objectFact{},
		PackageFacts: map[packageFactKey]analysis.Fact{},
	for _, dep := range an.Requires {
		depa := newAnalyzerAction(dep, cache)
		depa.triggers = append(depa.triggers, a)
		a.deps = append(a.deps, depa)
	a.pending = uint32(len(a.deps))

func getCachedFiles(cache *cache.Cache, ids []cache.ActionID, out []*string) error {
	for i, id := range ids {
		*out[i], _, err = cache.GetFile(id)

func (r *subrunner) do(act action) error {
	a := act.(*packageAction)
		r.Stats.finishPackage()
		r.Stats.finishInitialPackage()
	// compute hash of action
	a.cfg = a.Package.Config.Merge(r.cfg)
	h := cache.NewHash("staticcheck " + a.Package.PkgPath)
	// Note that we do not filter the list of analyzers by the
	// package's configuration. We don't allow configuration to
	// accidentally break dependencies between analyzers, and it's
	// easier to always run all checks and filter the output. This
	// also makes cached data more reusable.
	// OPT(dh): not all changes in configuration invalidate cached
	// data. Specifically, when a.factsOnly == true, we only care
	// about checks that produce facts, and settings that affect those
	// checks.
	// Config used for constructing the hash; this config doesn't have
	// Checks populated, because we always run all checks.
	// note that we don't hash staticcheck's version; it is set as the
	// salt by a package main.
	fmt.Fprintf(h, "cfg %#v\n", hashCfg)
	fmt.Fprintf(h, "pkg %x\n", a.Package.Hash)
	fmt.Fprintf(h, "analyzers %s\n", r.analyzerNames)
	fmt.Fprintf(h, "go 1.%d\n", r.GoVersion)
	// OPT(dh): do we actually need to hash vetx? can we not assume
	// that for identical inputs, staticcheck will produce identical
	for _, dep := range a.deps {
		dep := dep.(*packageAction)
		vetxHash, err := cache.FileHash(dep.vetx)
			return fmt.Errorf("failed computing hash: %w", err)
		fmt.Fprintf(h, "vetout %q %x\n", dep.Package.PkgPath, vetxHash)
	a.hash = cache.ActionID(h.Sum())

	// try to fetch hashed data
	ids := make([]cache.ActionID, 0, 2)
	ids = append(ids, cache.Subkey(a.hash, "vetx"))
		ids = append(ids, cache.Subkey(a.hash, "results"))
	if err := getCachedFiles(r.cache, ids, []*string{&a.vetx, &a.results}); err != nil {
		result, err := r.doUncached(a)
		a.skipped = result.skipped
		// OPT(dh): instead of collecting all object facts and encoding
		// them after analysis finishes, we could encode them as we
		// go. However, that would require some locking.
		// OPT(dh): We could sort gobFacts for more consistent output,
		// but it doesn't matter. The hash of a package includes all
		// of its files, so whether the vetx hash changes or not, a
		// change to a package requires re-analyzing all dependents,
		// even if the vetx data stayed the same. See also the note at
		// the top of loader/hash.go.
		tf, err := ioutil.TempFile("", "staticcheck")
		enc := gob.NewEncoder(tf)
		for _, gf := range result.facts {
			if err := enc.Encode(gf); err != nil {
				return fmt.Errorf("failed gob encoding data: %w", err)
		if _, err := tf.Seek(0, io.SeekStart); err != nil {
		a.vetx, err = r.writeCacheReader(a, "vetx", tf)
		out.Directives = make([]SerializedDirective, len(result.dirs))
		for i, dir := range result.dirs {
			out.Directives[i] = serializeDirective(dir, result.lpkg.Fset)
		out.Diagnostics = result.diags
		out.Unused = result.unused
		a.results, err = r.writeCacheGob(a, "results", out)

// ActiveWorkers returns the number of currently running workers.
func (r *Runner) ActiveWorkers() int {
	return r.semaphore.Len()

// TotalWorkers returns the maximum number of possible workers.
func (r *Runner) TotalWorkers() int {
	return r.semaphore.Cap()

func (r *Runner) writeCacheReader(a *packageAction, kind string, rs io.ReadSeeker) (string, error) {
	h := cache.Subkey(a.hash, kind)
	out, _, err := r.cache.Put(h, rs)
		return "", fmt.Errorf("failed caching data: %w", err)
	return r.cache.OutputFile(out), nil

func (r *Runner) writeCacheGob(a *packageAction, kind string, data interface{}) (string, error) {
	f, err := ioutil.TempFile("", "staticcheck")
	if err := gob.NewEncoder(f).Encode(data); err != nil {
		return "", fmt.Errorf("failed gob encoding data: %w", err)
	if _, err := f.Seek(0, io.SeekStart); err != nil {
	return r.writeCacheReader(a, kind, f)

type packageActionResult struct {
	unused unused.SerializedResult
	dirs []lint.Directive

func (r *subrunner) doUncached(a *packageAction) (packageActionResult, error) {
	// OPT(dh): for a -> b; c -> b; if both a and b are being
	// processed concurrently, we shouldn't load b's export data
	pkg, _, err := loader.Load(a.Package)
		return packageActionResult{}, err
	if len(pkg.Errors) > 0 {
		// this handles errors that occurred during type-checking the
		// package in loader.Load
		for _, err := range pkg.Errors {
			a.errors = append(a.errors, err)
		return packageActionResult{}, nil

	if len(pkg.Syntax) == 0 && pkg.PkgPath != "unsafe" {
		return packageActionResult{lpkg: pkg, skipped: true}, nil

	// OPT(dh): instead of parsing directives twice (twice because
	// U1000 depends on the facts.Directives analyzer), reuse the
	var dirs []lint.Directive
		dirs = lint.ParseDirectives(pkg.Syntax, pkg.Fset)
	res, err := r.runAnalyzers(a, pkg)
	return packageActionResult{
		diags: res.diagnostics,

func pkgPaths(root *types.Package) map[string]*types.Package {
	out := map[string]*types.Package{}
	var dfs func(*types.Package)
	dfs = func(pkg *types.Package) {
		if _, ok := out[pkg.Path()]; ok {
		out[pkg.Path()] = pkg
		for _, imp := range pkg.Imports() {

func (r *Runner) loadFacts(root *types.Package, dep *packageAction, objFacts map[objectFactKey]objectFact, pkgFacts map[packageFactKey]analysis.Fact) error {
	// Load facts of all imported packages
	vetx, err := os.Open(dep.vetx)
		return fmt.Errorf("failed loading cached facts: %w", err)
	pathToPkg := pkgPaths(root)
	dec := gob.NewDecoder(vetx)
		err := dec.Decode(&gf)
			return fmt.Errorf("failed loading cached facts: %w", err)
		pkg, ok := pathToPkg[gf.PkgPath]
		if gf.ObjPath == "" {
			pkgFacts[packageFactKey{
				Type: reflect.TypeOf(gf.Fact),
			obj, err := objectpath.Object(pkg, objectpath.Path(gf.ObjPath))
			objFacts[objectFactKey{
				Type: reflect.TypeOf(gf.Fact),
			}] = objectFact{gf.Fact, objectpath.Path(gf.ObjPath)}

func genericHandle(a action, root action, queue chan action, sem *tsync.Semaphore, exec func(a action) error) {
	// the action may have already been marked as failed during
	// construction of the action graph, for example because of
	// unresolved imports.
	for _, dep := range a.Deps() {
			// One of our dependencies failed, so mark this package as
			// failed and bail. We don't need to record an error for
			// this package, the relevant error will have been
			// reported by the first package in the chain that failed.
	if err := exec(a); err != nil {
	for _, t := range a.Triggers() {
		if t.DecrementPending() {
type analyzerRunner struct {
	// object facts of our dependencies; may contain facts of
	// analyzers other than the current one
	depObjFacts map[objectFactKey]objectFact
	// package facts of our dependencies; may contain facts of
	// analyzers other than the current one
	depPkgFacts map[packageFactKey]analysis.Fact

func (ar *analyzerRunner) do(act action) error {
	a := act.(*analyzerAction)
	results := map[*analysis.Analyzer]interface{}{}
	// TODO(dh): does this have to be recursive?
	for _, dep := range a.deps {
		dep := dep.(*analyzerAction)
		results[dep.Analyzer] = dep.Result
	// OPT(dh): cache factTypes, it is the same for all packages for a given analyzer
	// OPT(dh): do we need the factTypes map? most analyzers have 0-1
	// fact types. iterating over the slice is probably faster than
	factTypes := map[reflect.Type]struct{}{}
	for _, typ := range a.Analyzer.FactTypes {
		factTypes[reflect.TypeOf(typ)] = struct{}{}
	filterFactType := func(typ reflect.Type) bool {
		_, ok := factTypes[typ]
	a.Pass = &analysis.Pass{
		Analyzer: a.Analyzer,
		Files: ar.pkg.Syntax,
		OtherFiles: ar.pkg.OtherFiles,
		TypesInfo: ar.pkg.TypesInfo,
		TypesSizes: ar.pkg.TypesSizes,
		Report: func(diag analysis.Diagnostic) {
			if diag.Category == "" {
				diag.Category = a.Analyzer.Name
				Position: report.DisplayPosition(ar.pkg.Fset, diag.Pos),
				End: report.DisplayPosition(ar.pkg.Fset, diag.End),
				Category: diag.Category,
				Message: diag.Message,
			for _, sugg := range diag.SuggestedFixes {
					Message: sugg.Message,
				for _, edit := range sugg.TextEdits {
					s.TextEdits = append(s.TextEdits, TextEdit{
						Position: report.DisplayPosition(ar.pkg.Fset, edit.Pos),
						End: report.DisplayPosition(ar.pkg.Fset, edit.End),
						NewText: edit.NewText,
				d.SuggestedFixed = append(d.SuggestedFixed, s)
			for _, rel := range diag.Related {
				d.Related = append(d.Related, RelatedInformation{
					Position: report.DisplayPosition(ar.pkg.Fset, rel.Pos),
					End: report.DisplayPosition(ar.pkg.Fset, rel.End),
					Message: rel.Message,
			a.Diagnostics = append(a.Diagnostics, d)
		ImportObjectFact: func(obj types.Object, fact analysis.Fact) bool {
			key := objectFactKey{
				Type: reflect.TypeOf(fact),
			if f, ok := ar.depObjFacts[key]; ok {
				reflect.ValueOf(fact).Elem().Set(reflect.ValueOf(f.fact).Elem())
			} else if f, ok := a.ObjectFacts[key]; ok {
				reflect.ValueOf(fact).Elem().Set(reflect.ValueOf(f.fact).Elem())
		ImportPackageFact: func(pkg *types.Package, fact analysis.Fact) bool {
			key := packageFactKey{
				Type: reflect.TypeOf(fact),
			if f, ok := ar.depPkgFacts[key]; ok {
				reflect.ValueOf(fact).Elem().Set(reflect.ValueOf(f).Elem())
			} else if f, ok := a.PackageFacts[key]; ok {
				reflect.ValueOf(fact).Elem().Set(reflect.ValueOf(f).Elem())
		ExportObjectFact: func(obj types.Object, fact analysis.Fact) {
			key := objectFactKey{
				Type: reflect.TypeOf(fact),
			path, _ := objectpath.For(obj)
			a.ObjectFacts[key] = objectFact{fact, path}
		ExportPackageFact: func(fact analysis.Fact) {
			key := packageFactKey{
				Type: reflect.TypeOf(fact),
			a.PackageFacts[key] = fact
		AllPackageFacts: func() []analysis.PackageFact {
			out := make([]analysis.PackageFact, 0, len(ar.depPkgFacts)+len(a.PackageFacts))
			for key, fact := range ar.depPkgFacts {
				out = append(out, analysis.PackageFact{
			for key, fact := range a.PackageFacts {
				out = append(out, analysis.PackageFact{
		AllObjectFacts: func() []analysis.ObjectFact {
			out := make([]analysis.ObjectFact, 0, len(ar.depObjFacts)+len(a.ObjectFacts))
			for key, fact := range ar.depObjFacts {
				if filterFactType(key.Type) {
					out = append(out, analysis.ObjectFact{
			for key, fact := range a.ObjectFacts {
				if filterFactType(key.Type) {
					out = append(out, analysis.ObjectFact{
	res, err := a.Analyzer.Run(a.Pass)
	ar.stats.measureAnalyzer(a.Analyzer, ar.pkg.PackageSpec, time.Since(t))
type analysisResult struct {
	diagnostics []Diagnostic
	unused unused.SerializedResult

func (r *subrunner) runAnalyzers(pkgAct *packageAction, pkg *loader.Package) (analysisResult, error) {
	depObjFacts := map[objectFactKey]objectFact{}
	depPkgFacts := map[packageFactKey]analysis.Fact{}
	for _, dep := range pkgAct.deps {
		if err := r.loadFacts(pkg.Types, dep.(*packageAction), depObjFacts, depPkgFacts); err != nil {
			return analysisResult{}, err

	root := &analyzerAction{}
	var analyzers []*analysis.Analyzer
	if pkgAct.factsOnly {
		// When analyzing non-initial packages, we only care about
		// analyzers that produce facts.
		analyzers = r.factAnalyzers
		analyzers = r.analyzers

	all := map[*analysis.Analyzer]*analyzerAction{}
	for _, a := range analyzers {
		a := newAnalyzerAction(a, all)
		root.deps = append(root.deps, a)
		a.triggers = append(a.triggers, root)
	root.pending = uint32(len(root.deps))

	ar := &analyzerRunner{
		factsOnly: pkgAct.factsOnly,
		depObjFacts: depObjFacts,
		depPkgFacts: depPkgFacts,
	queue := make(chan action, len(all))
	for _, a := range all {
		if len(a.Deps()) == 0 {

	// Don't hang if there are no analyzers to run; for example
	// because we are analyzing a dependency but have no analyzers
	// that produce facts.
	for item := range queue {
		b := r.semaphore.AcquireMaybe()
			go genericHandle(item, root, queue, &r.semaphore, ar.do)
			// the semaphore is exhausted; run the analysis under the
			// token we've acquired for analyzing the package.
			genericHandle(item, root, queue, nil, ar.do)

	var unusedResult unused.SerializedResult
	for _, a := range all {
		if a != root && a.Analyzer.Name == "U1000" {
			// TODO(dh): figure out a clean abstraction, instead of
			// special-casing U1000.
			unusedResult = unused.Serialize(a.Pass, a.Result.(unused.Result), pkg.Fset)
		for key, fact := range a.ObjectFacts {
			depObjFacts[key] = fact
		for key, fact := range a.PackageFacts {
			depPkgFacts[key] = fact

	// OPT(dh): cull objects not reachable via the exported closure
	gobFacts := make([]gobFact, 0, len(depObjFacts)+len(depPkgFacts))
	for key, fact := range depObjFacts {
		if fact.path == "" {
			p, _ := objectpath.For(key.Obj)
				panic(fmt.Sprintf("got different object paths for %v. old: %q new: %q", key.Obj, fact.path, p))
			PkgPath: key.Obj.Pkg().Path(),
			ObjPath: string(fact.path),
		gobFacts = append(gobFacts, gf)
	for key, fact := range depPkgFacts {
			PkgPath: key.Pkg.Path(),
		gobFacts = append(gobFacts, gf)

	var diags []Diagnostic
	for _, a := range root.deps {
		a := a.(*analyzerAction)
		diags = append(diags, a.Diagnostics...)
	return analysisResult{
		unused: unusedResult,
func registerGobTypes(analyzers []*analysis.Analyzer) {
	for _, a := range analyzers {
		for _, typ := range a.FactTypes {
			// FIXME(dh): use RegisterName so we can work around collisions
			// in names. For pointer-types, gob incorrectly qualifies
			// type names with the package name, not the import path.

func allAnalyzers(analyzers []*analysis.Analyzer) []*analysis.Analyzer {
	seen := map[*analysis.Analyzer]struct{}{}
	out := make([]*analysis.Analyzer, 0, len(analyzers))
	var dfs func(*analysis.Analyzer)
	dfs = func(a *analysis.Analyzer) {
		if _, ok := seen[a]; ok {
		seen[a] = struct{}{}
		out = append(out, a)
		for _, dep := range a.Requires {
	for _, a := range analyzers {
// Run loads the packages specified by patterns, runs analyzers on
// them and returns the results. Each result corresponds to a single
// package. Results will be returned for all packages, including
// dependencies. Errors specific to packages will be reported in the
// respective results.
//
// If cfg is nil, a default config will be used. Otherwise, cfg will
// be used, with the exception of the Mode field.
//
// Run can be called multiple times on the same Runner and it is safe
// for concurrent use. All runs will share the same semaphore.
func (r *Runner) Run(cfg *packages.Config, analyzers []*analysis.Analyzer, patterns []string) ([]Result, error) {
	analyzers = allAnalyzers(analyzers)
	registerGobTypes(analyzers)

	for _, a := range analyzers {
		flag := a.Flags.Lookup("go")
		// OPT(dh): this is terrible
		flag.Value.Set(fmt.Sprintf("1.%d", r.GoVersion))

	r.Stats.setState(StateLoadPackageGraph)
	lpkgs, err := loader.Graph(cfg, patterns...)
	r.Stats.setInitialPackages(len(lpkgs))

	if len(lpkgs) == 0 {

	r.Stats.setState(StateBuildActionGraph)
	all := map[*loader.PackageSpec]*packageAction{}
	root := &packageAction{}
	for _, lpkg := range lpkgs {
		a := newPackageActionRoot(lpkg, all)
		root.deps = append(root.deps, a)
		a.triggers = append(a.triggers, root)
	root.pending = uint32(len(root.deps))

	queue := make(chan action)
	r.Stats.setTotalPackages(len(all) - 1)

	r.Stats.setState(StateProcessing)
	for _, a := range all {
		if len(a.Deps()) == 0 {

	sr := newSubrunner(r, analyzers)
	for item := range queue {
		r.semaphore.Acquire()
		go genericHandle(item, root, queue, &r.semaphore, func(act action) error {

	r.Stats.setState(StateFinalizing)
	out := make([]Result, 0, len(all))
	for _, item := range all {
		if item.Package == nil {
		out = append(out, Result{
			Package: item.Package,
			Initial: !item.factsOnly,
			Skipped: item.skipped,
			Failed: item.failed,
			Errors: item.errors,
			results: item.results,